Comparative Analysis of SDG Implementation Evolution Worldwide
1 Introduction
1.1 Overview and Motivation
The adoption of the SDGs by the United Nations in 2015 marked a significant global commitment to address pressing issues such as poverty, inequality, climate change, and more. The fact that these goals were unanimously adopted by 193 member states underscores their importance. This prompted us to ask: can we evaluate the progress? What has really been done so far? Although the SDGs have attracted considerable attention and backing, it is essential to evaluate the events preceding and following their implementation. Understanding the actions taken and the progress made is essential to determine whether these global commitments are resulting in tangible enhancements to individuals’ lives. By examining the evolution of all countries and their respective contributions towards achieving the SDGs, we can develop a comprehensive understanding of collective efforts and identify potential disparities or gaps.
1.3 Research questions
Focus on factors: What can explain the state of countries regarding sustainable development? (We will analyse different factors: scores from the Human Freedom Index, GDP per capita, military expenditure as a percentage of GDP and of government expenditure, unemployment rate, and internet usage.) See the data description for more precise information about the factors.
Focus on relationships between SDGs: How are the different SDGs linked? (We want to see whether some SDGs are linked in the sense that a high score on one implies a high score on the other, and thus whether we can form groups of SDGs that are comparable in that way.)
Focus on time: How has the adoption of the SDGs in 2015 influenced their achievement? (We want to compare the achievement (SDG scores are calculated even for years before the adoption) of the different countries before and after 2015 to see whether the adoption gave a real “push” to sustainable development.)
Focus on events: Is the evolution of sustainable development influenced by uncontrollable events, such as economic crises, health crises and natural disasters? (We will analyse the impact of COVID-19, natural disasters and conflicts (number of deaths, damages, etc.) on the SDG scores.) See the data description for more precise information about how the impact of these events is materialized in the data.
2 Data
2.1 Sources
We are collecting our data from the Sustainable Development Report (UN), the International Labour Organization (ILOSTAT), the World Bank, Our World in Data, the Cato Institute, Kaggle (disasters: we couldn’t find relevant, accessible information elsewhere) and GitHub. We found different datasets containing useful information related to the SDGs. The details about these data and the links are presented in the next section. Utilizing the kableExtra package, we provide a comprehensive list and corresponding links to our sources, as outlined below:
| Name of the Table | Source |
|---|---|
| D1_1_SDG | dashboards.sdgindex.org |
| D2_2_Unemployment_rate | ilo.org |
| D3_0_GDP_per_capita | data.worldbank.org |
| D3_1_Military_expenditure_percent_GDP | data.worldbank.org |
| D3_2_Military_expenditure_percent_gov_exp | data.worldbank.org |
| D4_0_Internet_usage | ourworldindata.org |
| D5_0_Human_freedom_index | cato.org |
| D6_0_Disasters | kaggle.com |
| D7_0_COVID | github.com |
| D8_0_Conflicts | datacatalog.worldbank.org |
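For reference, here is a minimal sketch of how such a table can be rendered with kableExtra (the `sources` data frame is a hypothetical stand-in for the object actually used in our scripts):
Code
#### Sources table rendering (sketch) ####
library(kableExtra)
# 'sources' is a hypothetical data frame with one row per dataset
# and two columns: Table and Source
sources %>%
  kbl(col.names = c("Name of the Table", "Source")) %>%
  kable_styling(bootstrap_options = c("striped", "hover"))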
2.2 Description
During the wrangling process, we pre-cleaned the different datasets and then merged them with our main table (D1_1_SDG), matching on country code and year. The tables below show all the variables present in our 9 databases. We then merge them to obtain our final table for the analysis.
2.2.1 Our databases
Sustainable Development Goals database (D1_1_SDG)
The Sustainable Development Goals (SDGs) are a universal set of 17 interlinked goals that were adopted by the United Nations in 2015 as part of the 2030 Agenda for Sustainable Development. These goals provide a shared blueprint for peace and prosperity for people and the planet, now and into the future.
Our primary database focuses on the Sustainable Development Goals (SDG). Below is a table summarizing the key variables included:
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| overallscore | Overall score on all 17 SDGs (the scores are percentages of achievement of the goals, determined by the UN based on several indicators) |
| goal1:goal17 | Score on each SDG except SDG 14 (16 variables) |
| population | Population of the country |
Unemployment rate database (D2_2_Unemployment_rate)
This database gives us comprehensive data on the unemployment rate for each country from 2000 to 2022. Originally, it included categories based on various age groups. However, for simplicity and coherence, the database has been streamlined to focus exclusively on the unemployment rate of individuals aged 15 years and older.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| unemployment.rate | Unemployment rate (% of the population 15 years old and older) |
GDP per capita database (D3_0_GDP_per_capita)
This database offers detailed information on the GDP per capita in dollars for various countries, covering the period from 2000 to 2022. It is designed to provide insights into the economic performance of each country over these years, measured through the lens of per capita GDP.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| GDPpercapita | GDP per capita |
Proportion of the GDP dedicated to Military expenditures database (D3_1_Military_expenditure_percent_GDP)
This database provides the share of GDP that each country has allocated to military expenditure. It covers the period from 2000 to 2022.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| MilitaryExpenditurePercentGDP | Military expenditures in percentage of GDP |
Internet usage database (D4_0_Internet_usage)
This database provides information on the percentage of the population that uses the internet in each country. It covers the period from 2000 to 2022.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| internet.usage | Internet usage (% of the population) |
Human freedom index database (D5_0_Human_freedom_index)
This database provides information on the Human Freedom Index (HFI) for each country. The HFI is a composite index that measures the degree to which people are free to enjoy important rights and freedoms. Although the source file is labelled 2022, it covers the period from 2000 to 2020.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2020) |
| region | Part of the world, group of countries (e.g. Eastern Europe, Sub-Saharan Africa, South Asia, etc.) |
| pf_law | Rule of law, mean score of: Procedural justice, Civil justice, Criminal justice, Rule of law (V-Dem) |
| pf_security | Security and safety, mean score of: Homicide; Disappearances, conflicts, and terrorism |
| pf_movement | Freedom of movement (V-Dem), Freedom of movement (CLD) |
| pf_religion | Freedom of religion, Religious organization repression |
| pf_assembly | Civil society entry and exit, Freedom of assembly, Freedom to form/run political parties, Civil society repression |
| pf_expression | Direct attacks on the press, Media and expression (V-Dem), Media and expression (Freedom House), Media and expression (BTI), Media and expression (CLD) |
| pf_identity | Same-sex relationships, Divorce, Inheritance rights, Female genital mutilation |
| ef_government | Government consumption, Transfers and subsidies, Government investment, Top marginal tax rate, State ownership of assets |
| ef_legal | Judicial independence, Impartial courts, Protection of property rights, Military interference, Integrity of the legal system, Legal enforcement of contracts, Regulatory costs, Reliability of police |
| ef_money | Money growth, Standard deviation of inflation, Inflation: Most recent year, Freedom to own foreign currency |
| ef_trade | Tariffs, Regulatory trade barriers, Black-market exchange rates, Movement of capital and people |
| ef_regulation | Credit market regulations, Labor market regulations, Business regulations |
Disaster list database (D6_0_Disasters)
This database provides information on the number of deaths and the number of injured, affected and homeless people, as well as the total number of people affected and the total infrastructure damage caused by disasters in each country. It covers the period from 2000 to 2021.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2021) |
| continent | Continent affected by the disasters (floods, hurricanes, etc.) |
| total_deaths | Number of deaths caused by disasters |
| no_injured | Number of people injured by disasters |
| no_affected | Number of people affected by disasters |
| no_homeless | Number of people made homeless by disasters |
| total_affected | Total number of people affected by disasters |
| total_damages | Total infrastructure damage (in thousands of US$) |
COVID database (D7_0_COVID)
This database provides information on the number of COVID-19 deaths, the number of COVID-19 cases and the Government Response Stringency Index in each country. It covers the period from 2020 to 2022.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2020-2022) |
| deaths_per_million | Number of COVID-19 deaths per million inhabitants |
| cases_per_million | Number of COVID-19 cases per million inhabitants |
| stringency | Government Response Stringency Index: composite measure based on 9 response indicators including school closures, workplace closures, and travel bans |
Conflicts database (D8_0_Conflicts)
This database provides information on the number of deaths, the number of people affected and the maximum intensity of conflicts in each country. It covers the period from 2000 to 2022.
| Variable Name | Explanation |
|---|---|
| code | Country code (ISO) |
| country | Country name |
| year | Year of the observation (2000-2022) |
| ongoing | Variable coded 1 for more than 25 deaths in intrastate conflict and 0 otherwise according to UCDP/PRIO Armed Conflict Dataset 17.1. |
| sum_deaths | Best estimate of deaths in all categories of violence (non-state, one-sided and state-based) recorded by the Uppsala Conflict Data Program in the country based on the UCDP GED dataset (unpublished 2016 data). The location of these events is used for estimating the extent of violence. |
| pop_affected | Share of population affected by violence in percentage (0 to 100) measured as described above based on population data from CIESIN, the PRIO-GRID structure as well as UCDP GED. |
| area_affected | Area affected by conflict |
| maxintensity | Two intensity levels are coded: minor armed conflicts (1) and wars (2). Takes the maximum intensity of conflict in the country, so it is coded 2 if there is at least one war (>=1000 deaths in intrastate conflict) during the year. Data from UCDP/PRIO Armed Conflict Dataset 17.1. |
2.3 Wrangling/cleaning
2.3.1 Pre-cleaning
To deal with the large scale of the datasets, we pre-cleaned each dataset before merging them together. This streamlined the process, simplifying the cleaning of the final, combined dataset. The treatment of missing values will be taken care of after merging our datasets.
2.3.1.1 Dataset on SDG
This is our main dataset; we clean it in order to keep the columns containing the following information: country name, country code, year, population, overall score and the SDG scores.
We start by importing the data and converting it into a DataFrame. Next, we rename the columns and convert the scores into numeric variables.
Code
#### D1_0_SDG importation ####
#Importing the data from the csv file
D1_0_SDG <- read.csv(here("scripts","data","SDG.csv"), sep = ";")
# Keeping only the first 22 columns
D1_0_SDG <- D1_0_SDG[,1:22]
# Renaming the columns
colnames(D1_0_SDG) <- c("code", "country", "year", "population",
"overallscore", "goal1", "goal2", "goal3",
"goal4", "goal5", "goal6", "goal7", "goal8",
"goal9", "goal10", "goal11", "goal12",
"goal13", "goal14", "goal15", "goal16",
"goal17")
# Function to convert the overallscore column into numeric values
# and replace the "," by "."
D1_0_SDG[["overallscore"]] <-
as.double(gsub(",", ".", D1_0_SDG[["overallscore"]]))
# Function to convert the scores columns into numeric values
# and also to replace the "," by "."
makenumSDG <- function(D1_0_SDG) {
for (i in 1:17) {
varname <- paste("goal", i, sep = "")
D1_0_SDG[[varname]] <-
as.double(gsub(",", ".", D1_0_SDG[[varname]]))}
return(D1_0_SDG)}
# Applying the function to the D1_0_SDG dataset
D1_0_SDG <- makenumSDG(D1_0_SDG)
We proceed by examining the missing values.
Code
#### D1_0_SDG missing values preparation and the graph ####
# Empty vector with one entry per column of the dataset
propmissing <- numeric(length(D1_0_SDG))
# For loop to get the proportion of missing values in each column
for (i in 1:length(D1_0_SDG)){
proportion <- mean(is.na(D1_0_SDG[[i]]))
propmissing[i] <- proportion}
# Vector containing all columns names of our dataset
variable_names <- colnames(D1_0_SDG)
# prepare our data for plotting
prop_missing_data <- data.frame(variable = variable_names,
prop_missing = propmissing)
# Preparation of the graph labels
prop_missing_data$hover_text <-
paste("Variable: ",
prop_missing_data$variable,
"\nMissing percentage: ",
round((prop_missing_data$prop_missing)*100, 2),
"%", sep = "")
# Creation of the plot to see the proportion of missing values per column
gg_prop_missing <- ggplot(prop_missing_data,
aes(x = variable,
y = prop_missing,
text = hover_text)) +
geom_bar(stat = "identity",
         fill = Fix_color, # fixed fill colour; a mapped fill would be overridden here
         color = "black") +
labs(title = "NAs by columns in the main dataset",
x = "Variable",
y = "Proportion of Missing Values") +
theme_minimal() +
  theme(plot.title = element_text(size = 10, hjust = 0.5), # after theme_minimal() so the tweaks are kept
        axis.title.x = element_text(size = 8),
        axis.title.y = element_text(size = 8)) +
  coord_flip() +
  guides(fill = FALSE)
# Convert ggplot object to plotly and remove the Modebar
plotly_prop_missing <- ggplotly(gg_prop_missing,
tooltip = "text") %>%
config(displayModeBar = FALSE)
# Print the plotly object
plotly_prop_missing
Observing that the ‘population’ column contains numerous NAs, we investigate and discover that these missing values come from observations representing regions rather than countries. Therefore, we can safely exclude these observations.
Code
#### D1_0_SDG missing values in population ####
# Find the proportion of missing population values across the regions
SDG0 <- D1_0_SDG %>%
group_by(code) %>%
select(population) %>%
summarize(NaPop = mean(is.na(population))) %>%
filter(NaPop != 0)
# plot the graph
ggplot(SDG0,
aes(x = code,
y = NaPop)) +
geom_bar(stat = "identity",
fill = Fix_color,
color = "black") +
labs(title = "Proportion of population information missing in the region observations",
x = "Region code",
y = "Proportion of NAs") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45,
hjust = 1),
plot.title = element_text(hjust = 0.5,
size = 10))
# Remove the lines where the 'code' starts with an "_" (these are regions)
D1_0_SDG <- D1_0_SDG %>%
  filter(!str_detect(code, "^_"))
Now that we have eliminated all missing values in the ‘population’ variable, we observe that our dataframe contains information on 166 countries.
We now move on to analysing the missing values for the SDGs and find that NAs are only present in three SDG scores: 1, 10, and 14. Additionally, when a country has NAs, they occur across all years or not at all. Consequently, we decide to conduct further investigation on these three SDG scores to determine whether to include them in our analysis.
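A minimal sketch of how this all-or-nothing pattern can be verified (our own check, not part of the original pipeline):
Code
#### NA pattern check (sketch) ####
# For each goal, the per-country share of NAs should be exactly 0 or 1,
# i.e. a country either misses a goal for all years or for none
na_pattern <- D1_0_SDG %>%
  group_by(code) %>%
  summarize(across(starts_with("goal"), ~ mean(is.na(.))))
all(unlist(na_pattern[, -1]) %in% c(0, 1)) # TRUE if the pattern holds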
For goal 1, there are only 9.04% missing values in 15 different countries. Goal 1 being “End poverty”, we decide to keep it and only remove the countries with no information for the analysis.
Code
#### SDG2 missing values ####
SDG2 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na1 = mean(is.na(goal1))) |>
filter(Na1 != 0)
country_number <- length(unique(D1_0_SDG$country))
length(unique(SDG2$code))/country_number
#> [1] 0.0904
For goal 10, there are only 10.2% missing values in 17 different countries. Goal 10 being “Reduced inequalities”, we decide to keep it and only remove the countries with no information for the analysis.
Code
#### SDG3 missing values ####
SDG3 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na10 = mean(is.na(goal10))) |>
filter(Na10 != 0)
length(unique(SDG3$code))/country_number
#> [1] 0.102
For goal 14, there are 24.1% missing values in 40 different countries. Goal 14 being “Life below water”, we decide not to keep it, because other SDGs such as “Life on land” and “Clean water and sanitation” already cover similar subjects.
Code
#### SDG4 missing values ####
SDG4 <- D1_0_SDG |>
group_by(code) |>
select(contains("goal")) |>
summarize(Na14 = mean(is.na(goal14))) |>
filter(Na14 != 0)
length(unique(SDG4$code))/country_number
#> [1] 0.241
D1_0_SDG <- D1_0_SDG %>%
select(-goal14)
We will work with various datasets and merge them using the country code and year as key identifiers. To ensure accurate matching, we first verify that country names are encoded in UTF-8 format. Then, we standardize the names of the countries (requiring a custom match for Turkey) and the country codes, utilizing the countrycode library. Additionally, we compile a list of all country codes from the main database to filter the other datasets. Lastly, we complete the database to include all possible “country, year” combinations, ensuring the total number of rows remains unchanged.
Code
#### D1_0_SDG country code ####
D1_0_SDG$country <- stri_encode(D1_0_SDG$country, to = "UTF-8")
D1_0_SDG <- D1_0_SDG %>%
mutate(country = countrycode(country, "country.name", "country.name",
custom_match = c("Türkiye" = "Turkey")))
D1_0_SDG$code <- countrycode(
sourcevar = D1_0_SDG$code,
origin = "iso3c",
destination = "iso3c",
)
list_country <- c(unique(D1_0_SDG$code))
D1_0_SDG_country_list <- D1_0_SDG %>%
filter(code %in% list_country) %>%
select(code, country)
D1_0_SDG_country_list <- D1_0_SDG_country_list %>%
select(code, country) %>%
distinct()
Finally, we complete the database to ensure there are no missing pairs of (year, code).
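A minimal sketch of this completion step, assuming the tidyr verbs complete() and fill() are available:
Code
#### Completion step (sketch) ####
# Guarantee one row per (code, year) pair over 2000-2022;
# any newly created rows hold NAs for the remaining columns
D1_0_SDG <- D1_0_SDG %>%
  complete(code, year = 2000:2022) %>%
  group_by(code) %>%
  fill(country, .direction = "downup") %>% # carry the country name over
  ungroup()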
Here are the first few lines of the cleaned dataset on SDG achievement scores:
For this first dataset, we reduced the size from 4,140 observations across 120 variables to 3,818 observations for 21 variables.
As mentioned, this is now our main dataset. All subsequent datasets will be merged with it; therefore, for all the following datasets, we will make sure to keep only data for the same countries and years as in this dataset, as sketched below. We have a total of 166 countries, and the years range from 2000 to 2022.
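Schematically, this recurring restriction can be written as a small helper; the function name is hypothetical and shown only for illustration:
Code
#### Scope helper (hypothetical) ####
# Restrict any pre-cleaned dataset to the 166 SDG countries
# and the 2000-2022 window before merging
keep_sdg_scope <- function(data) {
  data %>%
    filter(code %in% list_country,
           year >= 2000,
           year <= 2022)
}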
2.3.1.2 Dataset on Unemployment rate
In this dataset, the initial step involves importing the data. Next, we ensure that the names and codes of the countries are formatted in UTF-8, preventing any discrepancies due to mismatches in country names. Following this, we modify the column names and filter the data to include only the relevant countries and years, specifically the years 2000 to 2022, covering 166 countries from our primary dataset.
Code
#### D2_1_Unemployment_rate pre-cleaning ####
D2_1_Unemployment_rate <-
read.csv(here("scripts","data","UnemploymentRate.csv")) %>%
mutate(country = iconv(ref_area.label,
to = "UTF-8",
sub = "byte"),
country = countrycode(country,
"country.name",
"country.name"),
`unemployment rate` = obs_value / 100,
year = time,
age_category = classif1.label,
sex = sex.label) %>%
select(-ref_area.label, -time, -obs_value,
-classif1.label, -sex.label, -source.label,
-obs_status.label, -indicator.label) %>%
merge(D1_0_SDG_country_list[, c("country", "code")],
by = "country",
all.x = TRUE) %>%
filter(year >= 2000 & year <= 2022,
!str_detect(sex, fixed("Male")) & !str_detect(sex, fixed("Female")),
code %in% D1_0_SDG_country_list$code,
age_category == "Age (Youth, adults): 15+") %>%
select(code,
country,
year,
`unemployment rate`) %>%
distinct()
Here are the first few lines of the cleaned dataset on Unemployment rate:
For this dataset, we reduced the size from 82,800 observations across 8 variables to 3,812 observations for 4 variables.
2.3.1.3 Dataset on GDP military Expenditures
We have three different databases which contain information on each country over the years; each year is represented as one variable (wide format). We want to extract three variables for our analysis: GDP per capita, military expenditure as a percentage of GDP and military expenditure as a percentage of government expenditure.
Code
#### GDP per capita pre-cleaning ####
GDPpercapita <-
read.csv(here("scripts","data","GDPpercapita.csv"),
sep = ";")
MilitaryExpenditurePercentGDP <-
read.csv(here("scripts","data","MilitaryExpenditurePercentGDP.csv"),
sep = ";")
MiliratyExpenditurePercentGovExp <-
read.csv(here("scripts","data","MiliratyExpenditurePercentGovExp.csv"),
sep = ";")After importing the data, we fill in the missing country codes using the column Indicator.Name, because we realized after some manipulations, that some of the country codes were false, but the next column contained the right ones.
Code
#### GDP per capita fill code ####
fill_code <- function(data){
data <- data %>%
mutate(Country.Code = ifelse(!grepl("^[A-Z]{3}$", Country.Code),
Indicator.Name, Country.Code))
}
We create a set of functions that we will apply to each database. First, remove the variables that we don’t need, namely the years before 2000. Second, make sure that the values are numeric and rename the year variables (they all had an “X” before the year number). Third, transform the database from wide to long to match the main database. Fourth, transform the year variable into an integer and rearrange and rename the columns to match those of the other databases. Then, we apply these transformations to the three databases.
Code
#### Useful functions ####
remove <- function(data){
years <- seq(1960, 1999)
removeyears <- paste("X", years, sep = "")
data <- data[, !(names(data) %in% c("Indicator.Name",
"Indicator.Code",
"X",
removeyears))]
}
makenum <- function(data) {
for (i in 2000:2022) {
year <- paste("X", i, sep = "")
data[[year]] <- as.numeric(data[[year]])
}
return(data)
}
renameyear <- function(data) {
for (i in 2000:2022) {
varname <- paste("X", i, sep = "")
names(data)[names(data) == varname] <- gsub("X", "", varname)
}
return(data)
}
wide2long <- function(data) {
data <- pivot_longer(data,
cols = -c("Country.Name",
"Country.Code"),
names_to = "year",
values_to = "data")
return(data)
}
yearint <- function(data) {
data$year <- as.integer(data$year)
return(data)
}
nameorder <- function(data) {
colnames(data) <- c("country",
"code",
"year",
"data")
data <- data %>% select(c("code",
"country",
"year",
"data"))
}
cleanwide2long <- function(data){
data <- fill_code(data)
data <- remove(data)
data <- makenum(data)
data <- renameyear(data)
data <- wide2long(data)
data <- yearint(data)
data <- nameorder(data)
}
GDPpercapita <-
cleanwide2long(GDPpercapita)
MilitaryExpenditurePercentGDP <-
cleanwide2long(MilitaryExpenditurePercentGDP)
MiliratyExpenditurePercentGovExp <-
cleanwide2long(MiliratyExpenditurePercentGovExp)
We rename the columns with the main information, standardize the country codes and remove the countries that are not in our main database. We see that all 166 countries are there.
Code
#### GDP per capita renamed and standardized ####
GDPpercapita <- GDPpercapita %>%
rename(GDPpercapita = data)
MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>%
rename(MilitaryExpenditurePercentGDP = data)
MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>%
rename(MiliratyExpenditurePercentGovExp = data)
GDPpercapita$code <- countrycode(
sourcevar = GDPpercapita$code,
origin = "iso3c",
destination = "iso3c",
)
MilitaryExpenditurePercentGDP$code <- countrycode(
sourcevar = MilitaryExpenditurePercentGDP$code,
origin = "iso3c",
destination = "iso3c",
)
MiliratyExpenditurePercentGovExp$code <- countrycode(
sourcevar = MiliratyExpenditurePercentGovExp$code,
origin = "iso3c",
destination = "iso3c",
)
GDPpercapita <- GDPpercapita %>%
filter(code %in% list_country)
length(unique(GDPpercapita$code))
#> [1] 166
MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>%
filter(code %in% list_country)
length(unique(MilitaryExpenditurePercentGDP$code))
#> [1] 166
MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>%
filter(code %in% list_country)
length(unique(MiliratyExpenditurePercentGovExp$code))
#> [1] 166
There were only 157 countries present in both the main SDG dataset and these 3 datasets, but we suspected that some of the missing countries were in the database without being correctly matched. Indeed, Bahamas was in the database but instead of the code “BHS” there was “The”; for “COD” it was “Dem. Rep.”; for “COG” it was “Rep”; etc. We noticed that the correct code sits in another column of the initial database, “Indicator.Name”. We went back to the initial database and, before cleaning it, filled in the right codes (as seen above); after rerunning the code, we see that we have all 166 countries from the initial dataset.
Code
#### Missing countries ####
list_country_GDP <- c(unique(GDPpercapita$code))
setdiff(list_country, list_country_GDP)
#> character(0)
Code
#### Pre-cleaned datasets on GDP per capita ####
D3_1_GDP_per_capita <- GDPpercapita
D3_2_Military_Expenditure_Percent_GDP <- MilitaryExpenditurePercentGDP
D3_3_Miliraty_Expenditure_Percent_Gov_Exp <- MiliratyExpenditurePercentGovExp
Here are the first few lines of the cleaned dataset of GDP per capita:
For this dataset, we went from 266 observations (wide format) for 68 variables to 3,818 observations (long format) for 4 variables.
Here are the first few lines of the cleaned dataset of military expenditures in percentage of GDP:
For this dataset, we went from 266 observations (wide format) for 68 variables to 3,818 observations (long format) for 4 variables.
Here are the first few lines of the cleaned dataset of military expenditures in percentage of government expenditures:
For this dataset, we went from 266 observations (wide format) for 68 variables to 3,818 observations (long format) for 4 variables.
2.3.1.4 Dataset on internet usage
To prepare the dataset on internet usage to be merged with the other data, we first import the data. Then, we keep only the years that we are interested in (2000 to 2022). We also rename the columns and keep only the countries that match the list of countries in the main SDG dataset.
Code
#### Internet usage pre-cleaning ####
D4_0_Internet_usage <- read.csv(here("scripts", "data", "InternetUsage.csv")) %>%
filter(Year >= 2000, Year <= 2022) %>%
rename(
code = Code,
country = Entity,
year = Year,
internet_usage = Individuals.using.the.Internet....of.population.
) %>%
mutate(internet_usage = internet_usage / 100) %>%
filter(code %in% list_country) %>%
select(code, country, year, internet_usage)
Here are the first few lines of the cleaned dataset of internet usage:
For this dataset, we reduced the size from 6,570 observations across 4 variables to 3,433 observations for 4 variables.
2.3.1.5 Dataset on human freedom index
After importing the data from the Cato Institute website, we noticed that even though the file was called “Human Freedom Index 2022”, the available observations only go from 2000 up to 2020. We first decided to modify it to match our other datasets, by renaming, re-encoding and standardizing the columns containing the country names.
Code
#### Human Freedom Index pre-cleaning 1 ####
data <- read.csv(here("scripts", "data", "human-freedom-index-2022.csv"))
#data in tibble
datatibble <- tibble(data)
# Rename the column countries into country to match the other databases
names(datatibble)[names(datatibble) == "countries"] <- "country"
# Make sure the encoding of the country names are UTF-8
datatibble$country <- iconv(datatibble$country, to = "UTF-8", sub = "byte")
# standardize country names
datatibble <- datatibble %>%
mutate(country = countrycode(country, "country.name", "country.name"))
Once done, we could verify which countries were and were not present in both these observations and our main SDG dataset. We decided to keep the ones that matched between the two datasets.
Code
#### Human Freedom Index pre-cleaning 2 ####
# Merge by country name
datatibble <- datatibble %>%
left_join(D1_0_SDG_country_list, by = "country")
datatibble <- datatibble %>% filter(code %in% list_country)
(length(unique(datatibble$code)))
#> [1] 159
# See which ones are missing
list_country_free <- c(unique(datatibble$code))
setdiff(list_country, list_country_free)
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
# Turkey was missing but present in the initial database (it was a problem
# when standardizing the country names of D1_0SDG_country_list
#that we corrected) and the other missing countries are:
#"AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
D5_0_Human_freedom_index <- datatibble
Then, we noticed that many of the 141 variables were not relevant for us. So we decided to keep the ones that refer to country information (such as code, year, etc.) and the human freedom scores per category (pf for personal freedom, ef for economic freedom).
Code
#### Human Freedom Index pre-cleaning 3 ####
# Erasing useless columns to keep only the general ones.
D5_0_Human_freedom_index <- select(D5_0_Human_freedom_index, year, country,
region, hf_score, pf_rol, pf_ss,
pf_movement, pf_religion, pf_assembly,
pf_expression, pf_identity, pf_score,
ef_government, ef_legal, ef_money, ef_trade,
ef_regulation, ef_score, code)
D5_0_Human_freedom_index <- D5_0_Human_freedom_index %>%
rename(
pf_law = names(D5_0_Human_freedom_index)[5], # Renames the 5th column to "pf_law"
pf_security = names(D5_0_Human_freedom_index)[6] # Renames the 6th column to "pf_security"
)
Here are the first few lines of the partially cleaned dataset on Human Freedom Index scores:
For this first dataset, we reduced the size from 3’465 observations across 141 variables to 3339 observations for 4 variables.
2.3.1.6 Dataset on Disasters
For this dataset on disasters, we imported the data from Kaggle, as we couldn’t access the original dataset, which is private and comes from the EOSDIS system, an interactive interface for browsing full-resolution, global, daily satellite images from NASA. Once we made sure that our file called “Disasters” was converted into a data frame, we selected the specific columns that we were interested in.
Code
#### Disasters pre-cleaning 1 ####
Disasters <- read.csv(here("scripts", "data", "Disasters.csv")) %>%
select(Year, Country, ISO, Location, Continent, Disaster.Subgroup,
Disaster.Type, Total.Deaths, No.Injured, No.Affected, No.Homeless,
Total.Affected, Total.Damages...000.US..)
Because our file showed all the disasters in each country over the years 1970-2021 and we wanted to focus on a specific period, we filtered our data to keep the years between 2000 and 2022. Then we rearranged our data, changing the data types of all the columns and their names to match our other datasets.
Code
#### Disasters pre-cleaning 2 ####
# Rearrange the columns, changed the type of data, renamed the columns
Rearanged_Disasters <- Disasters %>%
filter(Year >= 2000 & Year <= 2022) %>%
mutate(
code = as.character(ISO),
country = as.character(Country),
year = as.integer(Year),
continent = as.character(Continent),
disaster.subgroup = as.character(Disaster.Subgroup),
disaster.type = as.character(Disaster.Type),
location = as.character(Location),
total.deaths = as.numeric(Total.Deaths),
no.injured = as.numeric(No.Injured),
no.affected = as.numeric(No.Affected),
no.homeless = as.numeric(No.Homeless),
total.affected = as.numeric(Total.Affected),
total.damages = as.numeric(Total.Damages...000.US..)
)
We then grouped the data by “year”, “code”, “country” and “continent” and summarized it. Here you can see that we re-selected specific columns, as our first pre-selection was still too wide and some variables such as disaster.subgroup and disaster.type weren’t pertinent. We arranged the columns based on “code”, “country”, “year” and “continent” to match the other datasets.
Code
#### Disasters pre-cleaning 3 ####
Disasters <- Rearanged_Disasters %>%
group_by(year,code, country, continent) %>%
summarize(
total_deaths = sum(total.deaths, na.rm = TRUE),
no_injured = sum(no.injured, na.rm = TRUE),
no_affected = sum(no.affected, na.rm = TRUE),
no_homeless = sum(no.homeless, na.rm = TRUE),
total_affected = sum(total.affected, na.rm = TRUE),
total_damages = sum(total.damages, na.rm = TRUE)
)
D6_0_Disasters <- Disasters %>%
select(code, country, year, continent, total_deaths, no_injured, no_affected,
no_homeless, total_affected, total_damages) %>%
arrange(code, country, year, continent)
Finally, we filtered our disasters data to keep only the countries present in our main dataset. We analysed the missing countries and identified three (BHR, BRN, MLT) that are unexpectedly missing.
Code
#### Disasters pre-cleaning 4 ####
D6_0_Disasters <- D6_0_Disasters %>% filter(code %in% list_country)
length(unique(D6_0_Disasters$code))
#> [1] 163
# Here we see which countries are missing
list_country_disasters <- c(unique(D6_0_Disasters$code))
setdiff(list_country, list_country_disasters)
#> [1] "BHR" "BRN" "MLT"Here are the first few lines of the cleaned dataset on Disasters:
2.3.1.7 Dataset on COVID
This dataset contains information on the COVID-19 pandemic between 2020 and 2022. The observations are by year, month and day. After importing the database, we transform the date (format YYYY-MM-DD) in order to keep only the year.
Code
#### COVID pre-cleaning 1 ####
COVID <- read.csv(here("scripts", "data", "COVID.csv")) %>%
select(iso_code, location, date, new_cases_per_million,
new_deaths_per_million, stringency_index) %>%
mutate(date = as.integer(year(date)))
We perform a first round of investigation of the missing values before aggregating the values by year. We begin with the variables “cases per million” and “deaths per million”: seeing that for each country we have either only missing values or a very low percentage of missing values (~1%), we can compute the sum over each year and ignore the missing values without altering the data. Indeed, where all the values are missing, the computation will return an NA. We then look at the “stringency” variable, for which we have 3 scenarios:
~20% of missing values: we ignore missing values when computing the mean to get an idea of the stringency each year (because we compute the mean stringency over the year, a few missing days are not a problem; stringency cannot evolve that fast).
all values are missing: we can ignore the missing values when computing the mean, because it will still return a missing value.
almost all values are missing: here the mean doesn’t make sense, so we replace the values by NAs to be coherent. The countries with this issue are ERI, GUM, PRI and VIR. We verify whether they are in our main dataset; since none of them are, we can ignore the issue, as these rows will be removed later anyway.
We aggregate the observations of all days of a year into one observation per country, summing cases and deaths and averaging the stringency index.
Code
#### COVID missing values ####
COVID1 <- COVID %>%
group_by(iso_code) %>%
summarize(NaDeaths = round(mean(is.na(new_deaths_per_million)),2),
NaCases = round(mean(is.na(new_cases_per_million)), 2),
NaStringency = round(mean(is.na(stringency_index)), 2)) %>%
pivot_longer(cols = starts_with("Na"),
names_to = "Variable",
values_to = "NaValue")%>%
filter(NaValue!=0)
ggplot(COVID1,
aes(x = as.factor(NaValue),
fill = Variable)) +
geom_bar(stat = "count",
position = position_dodge2(preserve = "single"),
width = 0.35) +
labs(title = "Patterns of NAs for COVID variables before cleaning",
x = "proportion of NAs",
y = "Count of countries") +
scale_fill_viridis_d(name = "Variables",
begin = 0.5,
end = 1,
direction = -1) +
theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5)) # after theme_minimal() so the tweak is kept
issue_list <- c("ERI",
"GUM",
"PRI",
"VIR")
is.element(issue_list, list_country)
#> [1] FALSE FALSE FALSE FALSE
COVID <- COVID %>%
group_by(location, date) %>%
mutate(
cases_per_million = sum(new_cases_per_million, na.rm = TRUE),
deaths_per_million = sum(new_deaths_per_million, na.rm = TRUE),
stringency = mean(stringency_index, na.rm = TRUE)
)%>%
ungroup()
Now that all the variables of interest are aggregated by year, we remove the variables that we no longer need and rename the remaining ones to match the main dataset.
Code
#### COVID renaming ####
COVID <- COVID %>%
group_by(location, date) %>%
distinct(date, .keep_all = TRUE) %>%
ungroup()
COVID <- COVID %>%
select(-c(new_cases_per_million, new_deaths_per_million, stringency_index))
colnames(COVID) <- c("code",
"country",
"year",
"cases_per_million",
"deaths_per_million",
"stringency")We remove the years that exceed 2022, we make sure that the country codes are all iso codes with 3 letters (we observe that sometimes they are preceded by “OWID_”) and we standardize the country codes.
Code
#### COVID years and code cleaning ####
COVID <- COVID[COVID$year <= 2022, ]
COVID$code <- gsub("OWID_", "", COVID$code)
COVID$code <- countrycode(
sourcevar = COVID$code,
origin = "iso3c",
destination = "iso3c"
)
We remove the observations of countries that aren’t in our main SDG dataset and find that all 166 countries of the main dataset are also present in this one.
Code
#### COVID pre-cleaned dataset ####
D7_0_COVID <- COVID %>%
filter(code %in% list_country)
length(unique(COVID$code))
#> [1] 238
Here are the first few lines of the cleaned dataset on COVID-19:
2.3.1.8 Dataset on Conflicts
For our conflicts dataset, we imported the data from the World Bank data catalog. Once we made sure that our file called “Conflicts” was converted into a data frame, we selected the specific columns that we were interested in.
Code
#### Conflicts dataset ####
Conflicts <- read.csv(here("scripts", "data", "Conflicts.csv")) %>%
as.data.frame() %>%
select(year, country, ongoing, gwsum_bestdeaths, pop_affected,
peaceyearshigh, area_affected, maxintensity, maxcumulativeintensity)
Our file showed all the conflicts and their consequences per country over the years 2000-2016; we couldn’t find a better or more complete dataset. As we consider conflicts as events, we will only take into account results between 2000 and 2016. Then we rearranged our data, changing the data types of all the columns and their names to match our other datasets. We grouped the data by “year” and “country”, re-selected some variables and summarized the data.
Code
#### Conflicts rearranging 1 ####
Rearanged_Conflicts <- Conflicts %>%
filter(year >= 2000 & year <= 2022)%>%
mutate(
ongoing = as.integer(ongoing),
country = as.character(country),
year = as.integer(year),
gwsum_bestdeaths = as.numeric(gwsum_bestdeaths),
pop_affected = as.numeric(pop_affected),
area_affected = as.numeric(area_affected),
maxintensity = as.numeric(maxintensity),
)
# Group the data by "year", "country" and summarize the data
Conflicts <- Rearanged_Conflicts %>%
group_by(year, country) %>%
summarize(
ongoing = sum(ongoing, na.rm = TRUE),
sum_deaths = sum(gwsum_bestdeaths, na.rm = TRUE),
pop_affected = sum(pop_affected, na.rm = TRUE),
area_affected = sum(area_affected, na.rm = TRUE),
maxintensity = sum(maxintensity, na.rm = TRUE),
)
Afterwards, we selected specific columns from the summarized data and arranged it by those columns. To make our dataset compatible with the main one and let the merging phase succeed, we made some adjustments to the country names. We then standardized and merged by country name, and finally kept only the countries present in our main dataset. Note that in the end only one country from the main dataset is missing from the initial conflicts database: BLR.
Code
#### Conflicts rearranging 2 ####
conflicts <- Conflicts %>%
select(country, year, ongoing, sum_deaths,
pop_affected, area_affected, maxintensity) %>%
arrange(country, year)
conflicts$country <- iconv(conflicts$country, to = "UTF-8", sub = "byte")
conflicts <- conflicts %>%
mutate(country = countrycode(country, "country.name", "country.name"))
conflicts <- conflicts %>%
left_join(D1_0_SDG_country_list, by = "country")
conflicts <- conflicts %>%
select(code, country, year, ongoing, sum_deaths,
pop_affected, area_affected, maxintensity) %>%
arrange(code, country, year)
D8_0_Conflicts <- conflicts %>%
filter(code %in% list_country)
(length(unique(conflicts$code)))
#> [1] 166
# See which countries are missing
list_country_conflicts <- c(unique(conflicts$code))
setdiff(list_country, list_country_conflicts)
#> [1] "BLR"Here are the first few lines of the cleaned dataset on Conflicts:
2.3.1.9 Merging our dataset
By merging our eight pre-cleaned datasets, we create a final database.
Code
#### Pre-cleaned datasets merged ####
D2_1_Unemployment_rate$country <- NULL
merge_1_2 <- D1_0_SDG |> left_join(D2_1_Unemployment_rate,
join_by(code, year))
D3_1_GDP_per_capita$country <- NULL
merge_12_3 <- merge_1_2 |> left_join(D3_1_GDP_per_capita,
join_by(code, year))
D3_2_Military_Expenditure_Percent_GDP$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_2_Military_Expenditure_Percent_GDP,
join_by(code, year))
D3_3_Miliraty_Expenditure_Percent_Gov_Exp$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_3_Miliraty_Expenditure_Percent_Gov_Exp,
join_by(code, year))
D4_0_Internet_usage$country <- NULL
merge_123_4 <- merge_12_3 |> left_join(D4_0_Internet_usage,
join_by(code, year))
D5_0_Human_freedom_index$country <- NULL
merge_1234_5 <- merge_123_4 |> left_join(D5_0_Human_freedom_index,
join_by(code, year))
D6_0_Disasters$country <- NULL
merge_12345_6 <- merge_1234_5 |> left_join(D6_0_Disasters,
join_by(code, year))
D7_0_COVID$country <- NULL
D7_0_COVID <- D7_0_COVID |> distinct(code, year, .keep_all = TRUE)
merge_123456_7 <- merge_12345_6 |> left_join(D7_0_COVID,
join_by(code, year))
D8_0_Conflicts$country <- NULL
all_Merge <- merge_123456_7 |> left_join(D8_0_Conflicts,
join_by(code, year))
2.3.2 Cleaning of the final database
2.3.2.1 Filling missing continent and region columns
When we merged our datasets, we noticed that some countries were not assigned their corresponding continent and/or region. This issue arose because we sourced the continent and region data from secondary databases, not from our main one. We now add the missing continents and regions.
Code
#### Filling missing continents and regions ####
# Update all_Merge with region and continent information
all_Merge <- all_Merge %>%
group_by(country) %>%
mutate(
continent = ifelse(is.na(continent), first(na.omit(continent)), continent),
region = ifelse(is.na(region), first(na.omit(region)), region)
) %>%
ungroup() %>%
mutate(continent = case_when(
code %in% c("BHR") ~ "Asia",
code %in% c("BRN") ~ "Asia",
code %in% c("MLT") ~ "Europe",
TRUE ~ continent
),
region = case_when(
code %in% c("AFG", "MDV") ~ "South Asia",
code %in% c("CUB") ~ "Latin America & the Caribbean",
code %in% c("STP", "SSD") ~ "Sub-Saharan Africa",
code %in% c("TKM", "UZB") ~ "Caucasus & Central Asia",
TRUE ~ region))
We order the database, beginning with the information on the country, the year, the continent and the region.
Code
#### Ordering the database and saving it as .CSV ####
all_Merge <- as.data.frame(all_Merge) %>%
select(code, year, country, continent, region, everything())
write.csv(all_Merge, file = here("scripts","data","all_Merge.csv"))
Here are the first few lines of the final dataset:
Final structure of our merged database: each of the 166 countries from D1_1_SDG is observed every year from 2000 to 2022, so each row has a key (code, year) that uniquely identifies an observation. The other columns are the variables listed above. Because some countries have a lot of missing information, we will have to eliminate some of them, but we will still have more than 2,000 rows in our database.
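As a sanity check, the uniqueness of the (code, year) key can be verified with a one-liner (a sketch, assuming dplyr is loaded):
Code
#### Key uniqueness check (sketch) ####
# TRUE if (code, year) uniquely identifies every row
nrow(all_Merge) == nrow(distinct(all_Merge, code, year))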
2.3.3 Treatment of missing values
We load our final database and visualize the missing values. We see that some variables have many NAs and that some patterns of row missingness emerge.
Code
#### Loading the final database to be cleaned ####
# Import the final database
all_Merge <- read.csv(here("scripts","data","all_Merge.csv"))
# Remove unnecessary column
all_Merge <- all_Merge %>%
select(-c(X))
# Create a dataframe with the goals without NAs summarize in one column to
# simplify the visualization
goal_vars <- all_Merge %>%
select(starts_with("goal")) %>%
filter_all(all_vars(!is.na(.))) %>%
colnames()
to_plot_missing <- all_Merge %>%
mutate(Goals_without_NAs = rowSums(!is.na(select(., all_of(goal_vars))))) %>%
select(-c(goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9,
goal11, goal12, goal13, goal15, goal16, goal17))
vis_dat(to_plot_missing, warn_large_data = FALSE) +
scale_fill_viridis_d(na.value = "grey99",
begin = 0.4,
end = 0.9) +
theme(
axis.text.x = element_text(angle = 90, size = 6),
legend.text = element_text(size = 8), # Adjust the size of legend text
legend.title = element_text(size = 10)
)
For each of our research questions, we will start from the merged dataset and deal with the missing values separately, because NAs often occur in the same row across many columns used for the same question. This allows us to avoid deleting observations when we do not need to.
For question 1, we only keep the years until 2020, because most of the explanatory variables that we want to use (those coming from the human freedom index) only have values until 2020.
Code
#### Cleaning the database for question 1 ####
data_question1 <- all_Merge %>%
filter(year<=2020) %>%
select(-c(total_deaths, no_injured, no_affected, no_homeless, total_affected,
total_damages, cases_per_million, deaths_per_million, stringency,
          ongoing, sum_deaths, pop_affected, area_affected, maxintensity))
For questions 2 and 4, we use the main data from the SDG database.
Code
#### Cleaning the database for question 2 and 4 ####
data_question24 <- all_Merge %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2,
goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
goal12, goal13, goal15, goal16, goal17))For question 3, we create 3 distinct databases according to the different type of event that we will analyse: disasters, COVID19 and conflicts. For the disasters, we only keep the years until 2021, because after this date, we don’t have data, moreover we decided to delete the country Bahrain, Brunei and Malta as we do not have any data concerning them. For the conflicts, we only keep the years until 2016, because after this date, we don’t have data. Concerning the conflict dataset, we decided to erase Belarus because once again we do not have any data concerning this country.
Code
#### Cleaning the database for question 3 ####
# Disasters
data_question3_1 <- all_Merge %>%
filter(year<=2021 & code!="BHR" & code!="BRN" & code!="MLT") %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2,
goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
goal12, goal13, goal15, goal16, goal17, total_deaths, no_injured,
no_affected, no_homeless, total_affected, total_damages))
# COVID
data_question3_2 <- all_Merge %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2,
goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
goal12, goal13, goal15, goal16, goal17, cases_per_million,
deaths_per_million, stringency))
# Conflicts
data_question3_3 <- all_Merge %>%
filter(year<=2016 & code !="BLR") %>%
select(c(code, year, country, continent, region, overallscore, goal1, goal2,
goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
goal12, goal13, goal15, goal16, goal17, ongoing, sum_deaths,
          pop_affected, area_affected, maxintensity))
Data for question 1
Dealing with missing values in columns
We begin by visualizing the missing values. To have a less cluttered graph, we group all the goals without NAs into one single variable. We decide to remove MiliratyExpenditurePercentGovExp, because it has too many missing values and contains information similar to MilitaryExpenditurePercentGDP. We also remove hf_score, pf_score and ef_score: they have many missing values and, since these variables summarize the other ones, deleting them will not make us lose information.
Code
#### Visualizing missing values by variables ####
# Create a dataframe with the goals without NAs summarize in one column to simplify the visualization
variable_names <- names(data_question1)
missing_percentages <-
sapply(data_question1, function(col) mean(is.na(col)) * 100)
missing_data_summary <- data.frame(
Variable = variable_names,
Missing_Percentage = missing_percentages
)
missing_data_summary <- missing_data_summary %>%
mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))
# Add a column with the number of missing values for the hover function
missing_data_summary$hover_text <- paste("Variable: ",
missing_data_summary$Variable,
"\nMissing percentage: ",
round(missing_data_summary$Missing_Percentage, 2),
"%", sep = "")
p <- ggplot(data = missing_data_summary,
aes(x = reorder(VariableGroup, Missing_Percentage),
y = Missing_Percentage,
fill = Missing_Percentage,
text = hover_text)) +
geom_bar(stat = "identity") +
scale_fill_gradientn(colors = MPer_pal(100)) +
labs(title = "Percentage of Missing Values by Variable",
x = "Variable",
y = "Missing Percentage") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1,
size=6),
axis.text.x = element_text(angle = 45,
hjust = 1,
size=8),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10),
plot.title=element_text(hjust=0.5)) +
guides(fill = FALSE)
# Convert to plotly and specify what to display when hovering the graph
ggplotly(p,
tooltip = "text") %>%
config(plot_ly,
displayModeBar = FALSE)
Code
data_question1 <- data_question1 %>%
select(-c(MiliratyExpenditurePercentGovExp, hf_score, pf_score, ef_score))
Dealing with missing values in rows
We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10, which we already discussed. We decide to remove the countries that have more than 50 missing values.
Code
#### Columns with number of missing values ####
# Count NAs per country in the remaining variables, then keep countries with >50
see_missing1_1 <- data_question1 %>%
  group_by(code) %>%
  summarise(across(-c(year, country, continent, region, population,
                      overallscore, goal1, goal2, goal3, goal4, goal5, goal6,
                      goal7, goal8, goal9, goal10, goal11, goal12, goal13,
                      goal15, goal16, goal17),
                   ~ sum(is.na(.)))) %>%
  mutate(num_missing = rowSums(across(where(is.numeric)))) %>%
  filter(num_missing > 50)
data_question1 <- data_question1 %>% filter(!code %in% see_missing1_1$code)
Here is the graph that allows us to visualize which countries have missing values and how many, for the countries with more than 50 NAs in total.
Code
#### Number of missing values per country (>50 NAs) ####
# Add a column with the number of missing values for the hover function
see_missing1_1$hover_text <- paste("Country: ", see_missing1_1$code,
"\nMissing values: ", see_missing1_1$num_missing)
# Creation of the plot with the number of missing values per country
p <- ggplot(see_missing1_1, aes(x = num_missing,
y = reorder(code, num_missing),
text = hover_text)) +
geom_bar(stat = "identity", fill = Fix_color) +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1, size=8),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10),
plot.title = element_text(hjust=0.5, size=12)) +
labs(title = "Number of missing values per country containing at least 50 NAs",
x = "Number of Missing Values",
y = "Countries") +
guides(fill = FALSE) # Remove color legend
# Convert to plotly and specify what to display when hovering over the graph
p_plotly <- ggplotly(p, tooltip = "text") %>%
config(displayModeBar = FALSE)
print(p_plotly)
We also look at patterns of missing values in the rows and see that, except for the two goals with NAs discussed earlier and for the triplet “ef_money”, “ef_trade” and “ef_regulation”, there are no well-defined patterns. We remove the countries that have NAs in these three variables at the same time.
Code
#### Visualizing the missing values in the rows ####
gg_miss_upset(data_question1,
nsets=10,
nintersects=11)
data_question1 <-
data_question1[rowSums(is.na(data_question1[, c("ef_money",
"ef_trade",
"ef_regulation")])) < 3, ]
data_question1 <- data_question1 %>%
group_by(code) %>%
filter(all(2000:2020 %in% year)) %>%
ungroup()
GDP per capita
Only Venezuela has missing values that we cannot fill (because the evolution over time is not linear), so we delete the country.
Code
#### Deletion of Venezuela ####
question1_missing_GDP <- data_question1 %>%
group_by(code) %>%
summarize(NaGDPpercapita = mean(is.na(GDPpercapita)))%>%
filter(NaGDPpercapita != 0)
data_question1 <- data_question1 %>% filter(code!="VEN")
Military expenditure in % of GDP
For MilitaryExpenditurePercentGDP, we plot its evolution over the years for each country containing missing values, distinguishing the percentage of missing values with colors.
Code
#### Evolution of MilitaryExpenditurePercentGDP over the time ####
MilitaryExpenditurePercentGDP1 <- data_question1 %>%
group_by(code) %>%
summarize(NaMil1 = round(mean(is.na(MilitaryExpenditurePercentGDP)),3)) %>%
filter(NaMil1 != 0)
filtered_data_Mil1 <- MilitaryExpenditurePercentGDP %>%
filter(code %in% MilitaryExpenditurePercentGDP1$code) # countries with NAs
filtered_data_Mil1 <- filtered_data_Mil1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
ungroup()
Evol_Missing_Mil1 <- ggplot(data = filtered_data_Mil1) +
geom_line(aes(x = year,
y = MilitaryExpenditurePercentGDP,
color = cut(PercentageMissing,
breaks = c(0,
0.1,
0.2,
0.3,
1),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")))) +
labs(title = "Military expenditure in % of GDP over time",
x = "Year",
y = "Military expenditure in % of GDP") +
scale_color_manual(values = c("0-10%" = MPer_0_10,
"10-20%" = MPer_10_20,
"20-30%"= MPer_20_30,
"30-100%" = MPer_30_100),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")) +
guides(color = guide_legend(title = "% NAs")) +
facet_wrap(~ code, nrow = 5) +
theme(strip.text = element_text(size = 6),
axis.text.x = element_text(angle = 45, size= 6),
plot.title=element_text(hjust=0.5)) +
scale_y_continuous(breaks = NULL)
print(Evol_Missing_Mil1)
We delete the countries with more than 30% of missing values; for the countries with less than 30% of missing values and a linear evolution in time, we fill the missing values using linear interpolation.
Code
#### Deletion of countries with (>30% NAs) ####
data_question1 <- data_question1 %>% filter(code!="ARE" &
code!="BHS" &
code!="BRB" &
code!="CRI" &
code!="HTI" &
code!="ISL" &
code!="PAN" &
code!="SYR" &
code!="VNM")
list_code <- c("BDI", "BEN", "CAF", "CIV", "COD",
"GAB", "NER", "TGO", "TTO", "ZMB")
for (i in list_code) {
country_data <- data_question1 %>%
filter(code == i)
interpolated_data <- na.interp(country_data$MilitaryExpenditurePercentGDP)
data_question1[data_question1$code == i, "MilitaryExpenditurePercentGDP"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Seeing that all the distributions are skewed, we decide to replace the remaining missing values (where less than 30% are missing) with the median by region.
Code
#### Distribution of the variable per region ####
question1_missing_Military <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_Military <- ggplot(data = question1_missing_Military) +
geom_histogram(aes(x = MilitaryExpenditurePercentGDP,
fill = cut(PercentageMissing,
breaks = c(0,
0.1,
0.2,
0.3,
1),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%"))),
bins = 30) +
labs(title = "Distribution of Military expenditures in % of GDP",
x = "Military expenditures in % of GDP",
y = "Frequency") +
scale_fill_manual(values = c("0-10%" = MPer_0_10,
"10-20%" = MPer_10_20,
"20-30%" = MPer_20_30,
"30-100%" = MPer_30_100),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")) +
guides(fill = guide_legend(title = "% NAs")) +
theme(plot.title=element_text(hjust=0.5))+
facet_wrap(~ region, nrow = 1)
print(Freq_Missing_Military)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(MilitaryExpenditurePercentGDP))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(MilitaryExpenditurePercentGDP, na.rm = TRUE),
# impute the regional median only where the value is missing and the
# country has less than 30% missing values
MilitaryExpenditurePercentGDP = ifelse(
is.na(MilitaryExpenditurePercentGDP) & PercentageMissingByCode < 0.3,
MedianByRegion,
MilitaryExpenditurePercentGDP
)
) %>%
select(-PercentageMissingByCode, -MedianByRegion)
Internet usage
There is only a low percentage of missing values.
Code
#### Percentage of missing values ####
question1_missing_Internet <- data_question1 %>%
group_by(code) %>%
summarize(NaInternet = mean(is.na(internet_usage)))%>%
filter(NaInternet != 0)
No country has more than 30% of NAs. We look at the evolution of the variable over time and fill the missing values with linear interpolation, because all the series are increasing and almost linear, except for CIV, which we delete.
Code
#### Evolution of the variable over time ####
question1_missing_Internet <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(internet_usage))) %>% # Column % NAs
filter(code %in% question1_missing_Internet$code)
Evol_Missing_Internet <- ggplot(data = question1_missing_Internet) +
geom_line(aes(x = year,
y = internet_usage,
color = cut(PercentageMissing,
breaks = c(0,
0.1,
0.2,
0.3,
1),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")))) +
labs(title = "Evolution of internet usage over time",
x = "Year",
y = "Internet usage in %") +
scale_color_manual(values = c("0-10%" = MPer_0_10,
"10-20%" = MPer_10_20,
"20-30%" = MPer_20_30,
"30-100%" = MPer_30_100),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")) +
guides(color = guide_legend(title = "% NAs")) +
theme(axis.text.x = element_text(angle = 45, size= 6),
axis.text.y = element_text(size= 6),
plot.title=element_text(hjust=0.5))+
facet_wrap(~ code, nrow = 4)
print(Evol_Missing_Internet)
list_code <- setdiff(unique(question1_missing_Internet$code), "CIV")
for (i in list_code) {
country_data <- data_question1 %>%
filter(code == i)
interpolated_data <- na.interp(country_data$internet_usage)
data_question1[data_question1$code == i, "internet_usage"] <- interpolated_data
}
data_question1 <- data_question1 %>%
filter(code!="CIV")
Human freedom index
Personal freedom: law
The variable pf_law has many NAs, but only for one country (BLZ), so we decide to remove that country.
Code
#### pf_law has NAs only for BLZ ####
data_question1 <- data_question1 %>%
filter(code!="BLZ")
Economic freedom: government
There are no more missing values, thanks to the previous steps.
Economic freedom: money
Five countries have missing values, but the percentage of missing values is always below 25%.
Code
#### Missing values in 5 countries (<25%) ####
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_money = mean(is.na(ef_money))) %>%
filter(Na_ef_money != 0)
We look at the evolution of the variable over time, and for the countries with a linear evolution over time, we fill the missing values using linear interpolation.
Code
#### Evolution of economic freedom: money over time ####
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
filter(code %in% question1_missing_ef_money$code)
Evol_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
geom_line(aes(x = year,
y = ef_money,
color = cut(PercentageMissing,
breaks = c(0,
0.1,
0.2,
0.3,
1),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")))) +
labs(title = "Evolution of economic freedom: money over time",
x = "Year",
y = "ef_money") +
scale_color_manual(values = c("0-10%" = MPer_0_10,
"10-20%" = MPer_10_20,
"20-30%" = MPer_20_30,
"30-100%" = MPer_30_100),
labels = c("0-10%",
"10-20%",
"20-30%",
"50-100%")) +
guides(color = guide_legend(title = "% NAs")) +
theme(axis.text.x = element_text(angle = 45, size= 6),
plot.title=element_text(hjust=0.5))+
facet_wrap(~ code, nrow = 1) +
scale_y_continuous(limits = c(0, 10))
print(Evol_Missing_ef_money)
list_code <- c("GEO",
"MKD")
for (i in list_code) {
country_data <- data_question1 %>%
filter(code == i)
interpolated_data <- na.interp(country_data$ef_money)
data_question1[data_question1$code == i, "ef_money"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Since all the distributions are skewed, we replace the missing values using the median by region.
Code
#### Evolution of economic freedom: money ####
question1_missing_ef_money <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
geom_histogram(aes(x = ef_money,
fill = cut(PercentageMissing,
breaks = c(0,
0.1,
0.2,
0.3,
1),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: money",
x = "ef_money",
y = "Frequency") +
scale_fill_manual(values = c("0-10%" = MPer_0_10,
"10-20%" = MPer_10_20,
"20-30%"= MPer_20_30,
"30-100%" = MPer_30_100),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")) +
guides(fill = guide_legend(title = "% NAs")) +
theme(plot.title=element_text(hjust=0.5))+
facet_wrap(~ region, nrow = 1)
print(Freq_Missing_ef_money)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_money))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MedianByRegion = median(ef_money, na.rm = TRUE),
# impute the regional median only where the value is missing and the
# country has less than 30% missing values
ef_money = ifelse(
is.na(ef_money) & PercentageMissingByCode < 0.3,
MedianByRegion,
ef_money
)) %>%
select(-PercentageMissingByCode, -MedianByRegion)
Economic freedom: trade
Six countries have missing values, but the percentage of missing values is always below 25%.
Code
#### Missing values in 6 countries (<25%) ####
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
summarize(Na_ef_trade = mean(is.na(ef_trade))) %>% # Column % NAs
filter(Na_ef_trade != 0)
We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.
Code
#### Evolution of economic freedom: trade over time ####
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_trade))) %>% # Column % NAs
filter(code %in% question1_missing_ef_trade$code)
Evol_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
geom_line(aes(x = year,
y = ef_trade,
color = cut(PercentageMissing,
breaks = c(0,
0.1,
0.2,
0.3,
1),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")))) +
labs(title = "Evolution of economic freedom: trade over time",
x = "Year",
y = "ef_trade") +
scale_color_manual(values = c("0-10%" = MPer_0_10,
"10-20%" = MPer_10_20,
"20-30%" = MPer_20_30,
"30-100%" = MPer_30_100),
labels = c("0-10%",
"10-20%",
"20-30%",
"50-100%")) +
guides(color = guide_legend(title = "% NAs")) +
theme(axis.text.x = element_text(angle = 45, size= 6),
plot.title=element_text(hjust=0.5))+
facet_wrap(~ code, nrow = 2) +
scale_y_continuous(limits = c(0, 10))
print(Evol_Missing_ef_trade)
# Linear interpolation for "AZE", "GEO", "MKD", "MNG"
list_code <- c("AZE",
"GEO",
"MKD",
"MNG")
for (i in list_code) {
country_data <- data_question1 %>% filter(code == i)
interpolated_data <- na.interp(country_data$ef_trade)
data_question1[data_question1$code == i, "ef_trade"] <- interpolated_data
}
Then, we look at the distribution of the variable per region. Since the only region that still has missing values shows a centered distribution, we replace the missing values using the mean of the region.
Code
#### Distribution of ef_trade missing values ####
question1_missing_ef_trade <- data_question1 %>%
group_by(code) %>%
mutate(PercentageMissing = mean(is.na(ef_trade))) %>% # Column % NAs
ungroup() %>%
group_by(region) %>%
filter(sum(PercentageMissing, na.rm = TRUE) > 0)
Freq_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
geom_histogram(aes(x = ef_trade,
fill = cut(PercentageMissing,
breaks = c(0,
0.1,
0.2,
0.3,
1),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%"))),
bins = 30) +
labs(title = "Distribution of economic freedom: trade",
x = "ef_trade",
y = "Frequency") +
scale_fill_manual(values = c("0-10%" = MPer_0_10,
"10-20%" = MPer_10_20,
"20-30%"= MPer_20_30,
"30-100%" = MPer_30_100),
labels = c("0-10%",
"10-20%",
"20-30%",
"30-100%")) +
guides(fill = guide_legend(title = "% NAs")) +
theme(plot.title=element_text(hjust=0.5))+
facet_wrap(~ region, nrow = 2)
print(Freq_Missing_ef_trade)
data_question1 <- data_question1 %>%
group_by(code) %>%
mutate(
PercentageMissingByCode = mean(is.na(ef_trade))
) %>%
ungroup() %>%
group_by(region) %>%
mutate(
MeanByRegion = mean(ef_trade, na.rm = TRUE),
# impute the regional mean only where the value is missing and the
# country has less than 30% missing values
ef_trade = ifelse(
is.na(ef_trade) & PercentageMissingByCode < 0.3,
MeanByRegion,
ef_trade
)) %>%
select(-PercentageMissingByCode, -MeanByRegion)
Economic freedom: regulation
There are no more missing values, thanks to the previous steps.
SDGs 1 and 10
Regarding the missing values in the goals, we noticed earlier that there were only missing values for goals 1 and 10. We will now investigate to find where these missing values are located. First, let’s look at the missing values for goal 1.
Code
#### Goal1 missing values ####
# Counting the missing values
na_count_1 <- sapply(data_question1, function(x) sum(is.na(x)))
na_count_df_1 <- data.frame(variable = names(na_count_1),
num_missing = na_count_1)
na_count_df_filtered_1 <- subset(na_count_df_1,
num_missing > 0)
# Remove missing values for goal 1
question1_missing_goal1 <- data_question1 %>%
group_by(code) %>%
summarize(Na_goal1 = mean(is.na(goal1))) %>%
filter(Na_goal1 != 0)
data_question1 <- data_question1 %>%
filter(!code %in% question1_missing_goal1$code)
# Counting the missing values again
na_count_10 <- sapply(data_question1, function(x) sum(is.na(x)))
na_count_df_10 <- data.frame(variable = names(na_count_10),
num_missing = na_count_10)
na_count_df_filtered_10 <- subset(na_count_df_10,
num_missing > 0)
We find that there are 126 missing values for goal 1, located in only 5 countries, so we decide to remove those countries. At this stage, there are only 42 remaining missing values, all for goal 10; therefore, we also remove the observations with missing values for goal 10.
Code
#### Goal10 missing values ####
# Remove missing values for goal 10
question1_missing_goal10 <- data_question1 %>%
group_by(code) %>%
summarize(Na_goal10 = mean(is.na(goal10))) %>%
filter(Na_goal10 != 0)
data_question1 <- data_question1 %>%
filter(!code %in% question1_missing_goal10$code)
Our dataset is now completely clean and ready to be used for question 1.
Data for questions 2 and 4
We create a column with the number of missing values by country over all the variables, except goals 1 and 10, which we already discussed. Since there are no other missing values, we stop here.
Code
#### Missing values columns ####
see_missing24 <- data_question24 %>%
group_by(code) %>%
summarise(across(everything(), ~ sum(is.na(.))),
.groups = "drop") %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
data_question24 <- data_question24 %>%
group_by(country) %>%
filter(!all(is.na(goal1)) & !all(is.na(goal10)))
Data for question 3
We create a column with the number of missing values by country over all the variables, except goals 1 and 10, which we already discussed. Apart from the event-specific variables treated below, there are no other missing values.
Disasters
We begin by visualizing the missing values.
Code
#### Visualizing missing values ####
variable_names <- names(data_question3_1)
missing_percentages <- sapply(data_question3_1, function(col) mean(is.na(col)) * 100)
missing_data_summary <- data.frame(
Variable = variable_names,
Missing_Percentage = missing_percentages
)
missing_data_summary <- missing_data_summary %>%
mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))
# Add a column with the number of missing values for the hover function
missing_data_summary$hover_text <- paste("Variable: ",
missing_data_summary$VariableGroup,
"\nMissing Percentage: ",
round(missing_data_summary$Missing_Percentage, 2),
"%",
sep = "")
p <- ggplot(missing_data_summary,
aes(x = reorder(VariableGroup, Missing_Percentage),
y = Missing_Percentage,
text = hover_text)) +
geom_bar(stat = "identity",
fill = Fix_color) +
labs(title = "Percentage of Missing Values by Variable",
x = "Variable",
y = "Missing Percentage") +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 1),
plot.title=element_text(hjust=0.5)) +
coord_flip() +
guides(fill = FALSE)
# Convert to plotly and specify what to display when hovering over the graph
p_plotly <- ggplotly(p, tooltip = "text") %>%
config(displayModeBar = FALSE)
p_plotly
In this particular case, even though there are many missing values in our disaster dataset, we make the hypothesis that disasters cannot happen every year in every country, given that these are uncontrollable and non-recurring events. Therefore, the NAs we encounter become zeroes, meaning that no climatic disaster was recorded.
Code
#### Replacing NAs by 0 ####
data_question3_1[is.na(data_question3_1)] <- 0
COVID19
We look at the missing values for the three COVID-specific variables during the COVID years, 2020 to 2022. We delete the countries that have NAs (only stringency has 6 countries with 100% NAs).
Code
#### COVID19 Missing values graphs ####
COVID4 <- data_question3_2 %>%
filter(year >= 2020 & year <= 2022) %>%
group_by(code) %>%
summarize(Na_deaths = mean(is.na(deaths_per_million)),
Na_cases = mean(is.na(cases_per_million)),
Na_stringency = mean(is.na(stringency))) %>%
filter(Na_deaths != 0 | Na_cases!=0 | Na_stringency !=0)
g1 <- ggplot(COVID4, aes(x = reorder(code, Na_deaths), y = Na_deaths)) +
geom_bar(stat = "identity",
fill = MPer_30_100,
color = "black") +
labs(title = "NAs by country: \ndeaths per million",
x = "Country code",
y = "% NAs") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1, size=6),
plot.title=element_text(size=10, hjust=0.5),
axis.text.y = element_text(size= 6),
axis.title.x = element_text(size= 8),
axis.title.y = element_text(size= 8)) +
scale_y_continuous(limits = c(0, 1))
g2 <- ggplot(COVID4,
aes(x = reorder(code, Na_cases),
y = Na_cases)) +
geom_bar(stat = "identity",
fill = MPer_30_100,
color = "black") +
labs(title = "NAs by country: \ncases per million",
x = "Country code",
y = "% NAs") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1, size=6),
plot.title=element_text(size=10, hjust=0.5),
axis.text.y = element_text(size= 6),
axis.title.x = element_text(size= 8),
axis.title.y = element_text(size= 8)) +
scale_y_continuous(limits = c(0, 1))
g3 <- ggplot(COVID4,
aes(x = reorder(code, Na_stringency),
y = Na_stringency)) +
geom_bar(stat = "identity",
fill = Fix_color,
color = "black") +
labs(title = "NAs by country: \nstringency",
x = "Country code",
y = "% NAs") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1, size=6),
plot.title=element_text(size=10, hjust=0.5),
axis.text.y = element_text(size= 6),
axis.title.x = element_text(size= 8),
axis.title.y = element_text(size= 8))
g1 + g2 + g3
data_question3_2 <- data_question3_2 %>%
filter(!code %in% COVID4$code)
We replace the NAs in the COVID columns for the years 2000 to 2019 with 0, because these are not real missing values: they were introduced when merging with the other databases.
Code
#### Replacing NAs by 0 ####
data_question3_2 <- data_question3_2 %>%
mutate(
cases_per_million = ifelse(is.na(cases_per_million), 0, cases_per_million),
deaths_per_million = ifelse(is.na(deaths_per_million), 0, deaths_per_million),
stringency = ifelse(is.na(stringency), 0, stringency)
)
Conflicts
We create a column with the number of missing values by country over all the variables, except goals 1 and 10, which we already discussed. Three countries have missing values, so we remove them (MNE, SRB and SSD).
Code
#### Removing countries because of missing values ####
see_missing3_3 <- data_question3_3 %>%
group_by(code) %>%
summarise(across(-c(goal1, goal10), # exclude columns "goal1" and "goal10"
~ sum(is.na(.))),
.groups = "drop") %>%
mutate(num_missing = rowSums(across(-code))) %>%
filter(num_missing > 0)
data_question3_3 <- data_question3_3 %>% filter(!code %in% c("MNE",
"SRB",
"SSD"))
Code
#### EXPORT as CSV ####
write.csv(data_question1, file = here("scripts","data","data_question1.csv"))
write.csv(data_question24, file = here("scripts","data","data_question24.csv"))
write.csv(data_question3_1, file = here("scripts","data","data_question3_1.csv"))
write.csv(data_question3_2, file = here("scripts","data","data_question3_2.csv"))
write.csv(data_question3_3, file = here("scripts","data","data_question3_3.csv"))
3 EDA and Analysis of the data
3.1 Focus on the influence of the factors over the SDG scores
3.1.1 EDA: general exploratory data analysis
For this first part of our EDA, let’s first check the distribution of the SDG scores. The variables below are ordered by the average of their scores. The color represents the density of the observations: the brighter the color, the higher the density.
Code
#### Reshape then plot the distribution of the SDG scores ####
# Reshape the data from wide to long format for our SDG and our human freedom index scores
long_df_goal_distribution <- pivot_longer(Correlation_overall,
cols = starts_with("goal"),
names_to = "Goal",
values_to = "Value")
long_df_goal_distribution$Goal <- with(long_df_goal_distribution,
reorder(Goal, Value, FUN = mean))
long_df_hfi_distribution <- pivot_longer(Correlation_overall,
cols = pf_law:ef_regulation,
names_to = "Category",
values_to = "Value")
long_df_hfi_distribution$Goal <- with(long_df_hfi_distribution,
reorder(Category, Value, FUN = mean))
# Plot the distribution of the SDG scores
ggplot(long_df_goal_distribution,
aes(x = Value,
y = Goal,
fill = ..density..)) +
geom_density_ridges_gradient(scale = 3,
size = 0.3,
rel_min_height = 0.01) +
scale_fill_viridis_c(option = "D") +
theme(plot.title = element_text(hjust = 0.5)) +
labs(x = 'Score',
y = 'Goals',
title = 'SDG Scores Distribution')
As we can see, most of our goals have a left-skewed distribution, which suggests that most of the countries concerned have implemented good strategies for reaching the goals’ objectives. Some distributions are more spread out than others, reflecting the variance of the scores between countries, due to various reasons such as unequal amounts of money available to invest in solutions. In addition, the only right-skewed distribution is that of goal 9, which promotes infrastructure, innovation, and inclusive and sustainable industrialization; it therefore seems that many countries encounter difficulties in implementing good solutions for this goal. Furthermore, goal 1 has the highest density, which means that for this SDG most countries received similar scores.
Now let’s focus on the distribution of the Human Freedom Index scores, with the same ordering and color conventions as the previous distribution plot.
Code
#### Human Freedom Index Scores Distribution plot ####
ggplot(long_df_hfi_distribution, aes(x = Value,
y = Category,
fill = ..density..)) +
geom_density_ridges_gradient(scale = 3,
size = 0.3,
rel_min_height = 0.01) +
scale_fill_viridis_c(name = "density",
option = "D") +
theme(plot.title = element_text(hjust = 0.5),
plot.title.position = "plot") +
labs(x = 'Scores',
title = 'Human Freedom Index Scores Distribution')
The distribution of the Human Freedom Index scores follows the same trend as that of the SDG scores. Most of them are left-skewed, which means that countries tend to have good scores in general. The only scores not following the trend are pf_law and ef_legal, which tend to be lower on average. There could be multiple reasons for this: for instance, legal systems evolve slowly, because changes have many implications within and between countries and because opinions diverge. Therefore, investing money to raise these variables could be more difficult than raising the scores of other variables. Furthermore, pf_movement has the highest density, which means that most of our observations received the same high score.
Now let’s consider the remaining variables of the dataset dedicated to answering the influence of factors over our SDG scores.
Code
#### Distribution of the remaining variables ####
# Unemployment rate plot
unempl_d <- ggplot(Correlation_overall,
aes(x = unemployment.rate,
y = 1,
fill = ..density..)) +
geom_density_ridges_gradient(scale = 3,
size = 0.3,
rel_min_height = 0.01) +
scale_fill_viridis_c(name = "",
option = "D") +
theme(plot.title = element_text(hjust = 0.5,
size = 10),
plot.title.position = "plot") +
labs(y = 'Density',
x = 'Percentage',
title = 'Distribution of Unemployment Rate')
# GDP per capita plot
gdp_d <- ggplot(Correlation_overall,
aes(x = GDPpercapita,
y = 1,
fill = ..density..)) +
geom_density_ridges_gradient(scale = 3,
size = 0.3,
rel_min_height = 0.01) +
scale_fill_viridis_c(name = "",
option = "D") +
theme(plot.title = element_text(hjust = 0.5,
size = 10),
plot.title.position = "plot") +
labs(y = 'Density',
x = 'Values',
title = 'Distribution of GDP per Capita')
# Military expenditure plot
milit_d <- ggplot(Correlation_overall,
aes(x = MilitaryExpenditurePercentGDP,
y = 1,
fill = ..density..)) +
geom_density_ridges_gradient(scale = 3,
size = 0.3,
rel_min_height = 0.01) +
scale_fill_viridis_c(name = "",
option = "D") +
theme(plot.title = element_text(hjust = 0.5,
size = 10), # Center the title
plot.title.position = "plot") +
labs(y = 'Density',
x = 'Percentage',
title = 'Distribution of Military Expenditure (% of GDP)')
# Internet usage plot
internet_d <- ggplot(Correlation_overall,
aes(x = internet_usage,
y = 1,
fill = ..density..)) +
geom_density_ridges_gradient(scale = 3,
size = 0.3,
rel_min_height = 0.01) +
scale_fill_viridis_c(name = "",
option = "D") +
theme(plot.title = element_text(hjust = 0.5,
size = 10),
plot.title.position = "plot") +
labs(y = 'Density',
x = 'Values',
title = 'Distribution of Internet Usage')
# Arrange the plots in a 2x2 grid
grid.arrange(unempl_d,
gdp_d,
milit_d,
internet_d,
ncol = 2,
nrow = 2)
# y = 1 creates a single ridge line per variable: a common approach to
# visualize the distribution of a single variable without categorizing
# it by another variable.
# Note on the density legend: the fill is a probability density, not a
# frequency. Densities can exceed 1 (e.g. up to 12 for unemployment.rate)
# because the area under the curve, not its height, is normalized to 1;
# kernel smoothing makes the peak high wherever the data are concentrated.
# What matters is the shape of the curve, not the absolute density values,
# and each variable has its own density scale.
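# (Sketch) A density above 1 is normal for concentrated data: the area
# under the curve integrates to 1, not the heights (simulated example)
d <- density(rnorm(1000, mean = 0, sd = 0.02)) # narrow, concentrated data
max(d$y)                  # peak height well above 1
sum(d$y) * diff(d$x[1:2]) # area under the curve is still ~ 1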
All these variables have a right-skewed distribution. Taking the mode into account, most of the countries in our data have an unemployment rate between 2 and 7%, a GDP per capita between $1'000 and $10'000, a military expenditure between 10% and 20% of GDP, and an internet usage between 0 and 10%.
These variables further highlight the inequalities between the countries in our dataset. While most of our countries have low internet usage and/or a low GDP per capita, a few countries are more developed, thus probably wealthier, and therefore have better chances of obtaining higher scores.
Now, let’s display the distribution of the different SDG achievement scores per continent, using violin plots to get an overview of the modes, ranges and outliers of our observations. To color them, we use the median of each goal per continent.
Code
#### Prepare then plot the Distribution of the SDG by continent ####
# Long format per continent, with the median of each goal attached
# (replaces five near-identical per-continent blocks)
goal_cols <- c("overallscore", paste0("goal", c(1:13, 15:17)))
medians_all <- data_question1 %>%
dplyr::select(continent, all_of(goal_cols)) %>%
melt(id.vars = "continent") %>%
group_by(continent, variable) %>%
mutate(median_value = median(value)) %>%
ungroup() %>%
as.data.frame()
# Assign a color to each median group
medians_all$color <-
ifelse(medians_all$median_value > 75, "3",
ifelse(medians_all$median_value < 25, "1", '2'))
bandwidth_nrd <- bw.nrd(medians_all$value)
# Plot the SDG Distribution by Continent
ggplot(medians_all,
aes(x = variable,
y = value,
fill = color)) +
geom_violin(trim = FALSE,
bw = bandwidth_nrd) +
scale_fill_viridis_d(name = "",
direction = -1,
option = "D",
labels = c("High median",
"Medium median",
"Low median")) +
labs(title = "SDG Distribution by Continent",
x = "Goals",
y = "Scores",
fill = "Score Category") +
facet_grid(continent ~ .,
scales = "free_y") +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(plot.title = element_text(hjust = 0.5),
plot.title.position = "plot",
axis.text.x = element_text(angle = 90,
vjust = 0.5, hjust=1))
We notice that Europe is the continent where most goals have a median score above 75 (shown in dark blue). Only two goals have a median score below 25: goal 9 for Africa and goal 10 for the Americas. As seen before, goal 9 generally scores lower than the other goals. This could mean that access to technology and to sustainable, resilient infrastructure and industrialization is harder in Africa, for various reasons such as the economic situation of these countries, corruption, etc.
Goal 10 concerns the reduction of inequalities within and among countries. We therefore presume that the investments made in the Americas towards this goal are perhaps less effective, due for instance to cultural differences with other continents, or simply to less money being spent on the issue.
In addition, some distributions are quite dispersed, such as goal 13 in Oceania and goal 10 in Africa. This could reflect inequalities between the countries of a same continent, or again differing levels of investment in raising the scores.
Now let’s display the boxplots for the different variables of the Human Freedom Index.
Code
#### Prepare then plot the Distribution of the HFI by continent ####
# Long format per continent, with the median of each variable attached
hfi_cols <- c("pf_law", "pf_security", "pf_movement", "pf_religion",
"pf_assembly", "pf_expression", "pf_identity",
"ef_government", "ef_legal", "ef_money", "ef_trade",
"ef_regulation")
medians_all_HFI <- data_question1 %>%
dplyr::select(continent, all_of(hfi_cols)) %>%
melt(id.vars = "continent") %>%
group_by(continent, variable) %>%
mutate(median_value = median(value)) %>%
ungroup() %>%
as.data.frame()
# Assign a color to each median value
medians_all_HFI$color <- ifelse(medians_all_HFI$median_value > 7.5, "1",
ifelse(medians_all_HFI$median_value < 2.5, "2", '3'))
bandwidth_nrd_HFI <- bw.nrd(medians_all_HFI$value)
# Plot the distribution of the HFI by continent
ggplot(medians_all_HFI,
aes(x = variable,
y = value,
fill = color)) +
geom_violin(trim = FALSE,
bw = bandwidth_nrd_HFI) +
scale_fill_viridis_d(name = "",
option = "D",
direction = -1,
labels = c(">7.5",
"Between",
"< 2.5")) +
labs(title = "Human Freedom Index Scores Distribution by Continent",
x = "Variables",
y = "Scores",
fill = "Score Category") +
facet_grid(continent ~ .,
scales = "free_y") +
scale_y_continuous(labels = scales::label_number()) +
theme_classic() +
theme(plot.title = element_text(hjust = 0.5),
plot.title.position = "plot",
axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
Here we notice similar results, except that no variable has a median score below 2.5. Again, Europe is the continent with most of its median scores above 7.5.
For space reasons, and because of the different scales, we decided not to make violin plots per continent for the remaining variables. Nevertheless, their distributions can be seen in the general distribution plots shown earlier.
Now, let’s take a closer look at the overall correlations between our variables. Using our cleaned dataset, we build a correlation heatmap to help visualize the information. Given that most of our variables are not normally distributed, we use the Spearman method to calculate the correlations.
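As a quick sanity check of this non-normality claim, here is a minimal sketch (assuming data_question1 is loaded; shapiro.test accepts at most 5000 observations, hence the subsample):
Code
#### (Sketch) Normality check motivating Spearman ####
set.seed(1)
vars_to_test <- c("GDPpercapita", "unemployment.rate", "internet_usage")
sapply(as.data.frame(data_question1)[vars_to_test], function(x) {
x <- na.omit(x)
shapiro.test(sample(x, min(5000, length(x))))$p.value
})
# p-values near 0 reject normality, supporting a rank-based (Spearman)
# correlation over Pearson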
Code
#### Correlations between variables Heatmap ####
# Selecting the variables to be used
Correlation_overall <-data_question1 %>%
dplyr::select(population:ef_regulation)
# Correlation matrix calculation
cor_matrix_sper <-
cor(Correlation_overall,
method = "spearman",
use = "everything")
# Wide to long
cor_melted <-
melt(cor_matrix_sper)
# Adding a new column for hover text
cor_melted$hover_text <- paste("Variable 1: ",
cor_melted$Var1,
"\nVariable 2: ",
cor_melted$Var2,
"\nCorrelation between the variables: ",
round(cor_melted$value, 2))
# Plotting the heatmap
plotly_heatmap <- plot_ly(data = cor_melted,
x = ~Var1,
y = ~Var2,
type = "heatmap",
z = ~value,
text = ~hover_text,
hoverinfo = "text") %>%
layout(title = list(text = 'Correlation Matrix Heatmap',
font = list(size = 17)),
xaxis = list(title = '',
tickangle = -45),
yaxis = list(title = '',
tickangle = -45),
colorbar = list(title = 'Spearman\nCorrelation',
yanchor = "middle",
y = 0.5))
# Display the Plotly plot
plotly_heatmap %>%
config(displayModeBar = FALSE)
Looking at our heatmap, we notice that most of our goals are strongly correlated with one another, as are some of the Human Freedom Index variables (in particular the personal freedom (pf) scores on movement, religion, assembly and expression). This could be explained by the fact that some of these goals and scores share partially similar objectives, so that a rise in the score of one may go together with a rise in the scores of others. In addition, we notice that goals 12 and 13 (respectively “responsible consumption & production” and “climate action”, i.e. the climate-oriented goals) are strongly negatively correlated with most of our variables. For the moment, since correlation does not imply causality, we cannot deduce anything from this information. We will look at the correlations between our goals and variables in more detail in the analysis of the influence of the factors on the Sustainable Development Goals.
3.1.2 Analysis: Influence of the factors over the SDG
In order to answer the first question of our work, let’s start by zooming in on the correlation matrix heatmap built in our EDA. Here are the correlations between the SDGs and all the other (non-SDG) variables.
Code
### Correlation Matrix Heatmap SDG/Other variables ###
# computing Spearman p-values for the variables of interest
corr_matrix <- cor(data_question1[7:40], method = "spearman", use = "everything")
p_matrix2 <- matrix(nrow = ncol(data_question1[7:40]), ncol = ncol(data_question1[7:40]))
for (i in 1:ncol(data_question1[7:40])) {
for (j in 1:ncol(data_question1[7:40])) {
test_result <- cor.test(data_question1[7:40][, i], data_question1[7:40][, j],
method = "spearman", exact = FALSE) # match the Spearman correlations
p_matrix2[i, j] <- test_result$p.value
}
}
corr_matrix[which(p_matrix2 > p_value_threshold)] <- NA #only keeping significant pval alpha = 0.05
melted_corr_matrix_GVar <- melt(corr_matrix[19:34,1:18])
ggplot(melted_corr_matrix_GVar, aes(Var1, Var2, fill = value)) +
geom_tile() +
geom_text(aes(label = ifelse(!is.na(value) & abs(value) > threshold_heatmap, sprintf("%.2f", value), '')),
color = "black", size = 2) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
na.value = "white",
direction = 1,
begin = 0.1,
end = 1) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
axis.text.y = element_text(angle = 45, hjust = 1),
plot.title = element_text(hjust = 0.5), # Center the title
plot.title.position = "plot") +
labs(x = 'Variables', y = 'Goals',
title = 'Correlations Heatmap between goals and our other variables')
The numbers shown are the correlations whose p-values are significant (alpha = 0.05); we only display correlations larger than 0.75 in absolute value. The blank cells correspond to non-significant correlations.
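For reference, the two thresholds used in the code above are defined earlier in the full document; the values assumed here, consistent with the text, are:
Code
#### (Assumed) Thresholds used in the heatmaps ####
p_value_threshold <- 0.05 # significance level
threshold_heatmap <- 0.75 # display cutoff on |correlation|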
GDP per capita, internet_usage, pf_law and ef_legal are strongly correlated with most of our SDGs. This is mostly due to the large scope these variables encompass: they influence various sectors of the economy and thus affect almost all the SDGs. We can therefore think that these variables have a strong influence on the scores; nevertheless, as correlation does not imply causality, we still cannot jump to conclusions.
As we can see, goals 12 & 13 (responsible consumption & production, and climate action) are negatively correlated with most of our variables, as is the economic freedom: government variable with our SDGs. Nevertheless, goals 12 & 13 and ef_government are positively correlated with each other.
Now let’s zoom on the correlations between all our variables except the SDG:
Code
melted_corr_matrix_Var <- melt(corr_matrix[19:34,19:34])
ggplot(melted_corr_matrix_Var, aes(Var1, Var2, fill = value)) +
geom_tile() +
geom_text(aes(label = ifelse(!is.na(value) & abs(value) > threshold_heatmap, sprintf("%.2f", value), '')),
color = "black", size = 1.7) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
na.value = "white",
direction = 1,
begin = 0.1,
end = 1) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
axis.text.y = element_text(angle = 45, hjust = 1),
plot.title = element_text(hjust = 0.5), # Center the title
plot.title.position = "plot") +
labs(x = 'Variables', y = 'Variables',
title = 'Correlations Heatmap between other variables than SDG')
As noticed earlier, there is a strong correlation among the personal freedom (pf) variables, reflecting the Human Freedom Index scores on movement, religion, assembly and expression.
Again, we can see that GDP per capita, pf_law and ef_legal are highly correlated with some of the other variables. On the other hand, pf_movement, pf_assembly and pf_expression are now also highly correlated with some of the other variables. In addition, we notice that MilitaryExpenditurePercentGDP and ef_government are negatively correlated with our other variables.
In order to have a look at the influence of our factors over our dependent variables, let’s conduct a Principal Component Analysis.
Code
#### PCA and PCA Scree plot####
#Select our data and effectuate our PCA analysis
myPCA_s <- PCA(data_question1[,25:40], graph = FALSE)
#scree plot
fviz_eig(myPCA_s,
addlabels = TRUE,
linecolor = viridis(1,
option = "B",
begin = 0.5),
barcolor = "black",
barfill = viridis(10,
option = "D",
begin = 0,
end = 0.8)) +
theme_minimal() +
ggtitle(" PCA - Scree plot")
summary(myPCA_s)
#>
#> Call:
#> PCA(X = data_question1[, 25:40], graph = FALSE)
#>
#>
#> Eigenvalues
#> Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
#> Variance 7.781 2.027 1.155 1.068 0.756 0.686
#> % of var. 48.634 12.671 7.220 6.674 4.723 4.285
#> Cumulative % of var. 48.634 61.305 68.526 75.199 79.922 84.207
#> Dim.7 Dim.8 Dim.9 Dim.10 Dim.11 Dim.12
#> Variance 0.544 0.419 0.387 0.281 0.211 0.196
#> % of var. 3.400 2.621 2.418 1.754 1.317 1.228
#> Cumulative % of var. 87.608 90.228 92.647 94.401 95.718 96.946
#> Dim.13 Dim.14 Dim.15 Dim.16
#> Variance 0.180 0.141 0.105 0.063
#> % of var. 1.127 0.879 0.655 0.393
#> Cumulative % of var. 98.073 98.952 99.607 100.000
#>
#> Individuals (the 10 first)
#> Dist Dim.1 ctr cos2
#> 1 | 3.279 | -0.612 0.002 0.035 |
#> 2 | 3.179 | -0.549 0.002 0.030 |
#> 3 | 3.330 | -0.411 0.001 0.015 |
#> 4 | 3.317 | 0.066 0.000 0.000 |
#> 5 | 3.143 | -0.070 0.000 0.000 |
#> 6 | 2.966 | -0.018 0.000 0.000 |
#> 7 | 2.917 | 0.214 0.000 0.005 |
#> 8 | 3.128 | 0.343 0.001 0.012 |
#> 9 | 2.617 | 0.525 0.002 0.040 |
#> 10 | 2.501 | 0.768 0.003 0.094 |
#> Dim.2 ctr cos2 Dim.3 ctr
#> 1 -1.325 0.039 0.163 | 2.011 0.157
#> 2 -1.340 0.040 0.178 | 2.009 0.157
#> 3 -1.635 0.059 0.241 | 1.889 0.139
#> 4 -1.511 0.051 0.207 | 1.533 0.091
#> 5 -1.374 0.042 0.191 | 1.424 0.079
#> 6 -1.264 0.035 0.182 | 1.373 0.073
#> 7 -1.240 0.034 0.181 | 1.384 0.075
#> 8 -1.450 0.047 0.215 | 1.569 0.096
#> 9 -1.159 0.030 0.196 | 1.150 0.051
#> 10 -0.906 0.018 0.131 | 0.964 0.036
#> cos2
#> 1 0.376 |
#> 2 0.399 |
#> 3 0.322 |
#> 4 0.214 |
#> 5 0.205 |
#> 6 0.214 |
#> 7 0.225 |
#> 8 0.252 |
#> 9 0.193 |
#> 10 0.148 |
#>
#> Variables (the 10 first)
#> Dim.1 ctr cos2 Dim.2 ctr
#> unemployment.rate | 0.092 0.109 0.009 | 0.265 3.472
#> GDPpercapita | 0.760 7.425 0.578 | 0.303 4.537
#> MilitaryExpenditurePercentGDP | -0.221 0.625 0.049 | 0.539 14.347
#> internet_usage | 0.736 6.953 0.541 | 0.398 7.806
#> pf_law | 0.891 10.212 0.795 | 0.237 2.768
#> pf_security | 0.593 4.519 0.352 | 0.267 3.524
#> pf_movement | 0.799 8.211 0.639 | -0.372 6.839
#> pf_religion | 0.658 5.567 0.433 | -0.576 16.385
#> pf_assembly | 0.802 8.259 0.643 | -0.450 10.009
#> pf_expression | 0.869 9.693 0.754 | -0.256 3.221
#> cos2 Dim.3 ctr cos2
#> unemployment.rate 0.070 | 0.859 63.894 0.738 |
#> GDPpercapita 0.092 | -0.282 6.878 0.079 |
#> MilitaryExpenditurePercentGDP 0.291 | 0.396 13.597 0.157 |
#> internet_usage 0.158 | -0.198 3.397 0.039 |
#> pf_law 0.056 | 0.043 0.158 0.002 |
#> pf_security 0.071 | -0.152 2.004 0.023 |
#> pf_movement 0.139 | 0.151 1.969 0.023 |
#> pf_religion 0.332 | 0.167 2.401 0.028 |
#> pf_assembly 0.203 | 0.186 2.995 0.035 |
#> pf_expression 0.065 | 0.094 0.773 0.009 |
Code
#### PCA Biplot ####
#Biplot
fviz_pca_biplot(myPCA_s,
label="var",
col.var = viridis(1,
option = "D",
begin = 0.5),
geom="",
pointsize = 0.05,
labelsize = 3,
repel = TRUE) +
theme_minimal() +
ggtitle(" PCA - Biplot")With a eigenvalue bigger than 1 for the four first components, we conclude that there are 4 dimensions to take into account. Nevertheless, again, they are explaining less than 80% of cumulated variance. Therefore, the rule of thumb would suggest us to take 6 dimensions into account.
Looking at the biplot, only a few variables do not follow the common trend: MilitaryExpenditurePercentGDP, which is negatively correlated with Dim1 and positively with Dim2; ef_government, which is negatively correlated with both Dim1 and Dim2; and unemployment.rate, which is slightly positively correlated with Dim1 and Dim2.
As we can see, the presence of “GDPpercapita”, “internet_usage” and “ef_money” close to the x axis suggests that Dim1 may be associated with overall economic development and prosperity, as well as with the freedoms typically associated with more developed economies, such as freedom of expression (“pf_expression”), movement (“pf_movement”) and assembly (“pf_assembly”). Dim2, in turn, could be contrasting socio-economic stability with factors that often increase with instability or conflict, such as higher military expenditure or unemployment rates.
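To back this reading of the axes, one can inspect which variables contribute most to each dimension; a minimal sketch with factoextra (already used above for the scree plot):
Code
#### (Sketch) Variable contributions to Dim1 and Dim2 ####
fviz_contrib(myPCA_s, choice = "var", axes = 1, top = 10) +
ggtitle("Contributions to Dim1")
fviz_contrib(myPCA_s, choice = "var", axes = 2, top = 10) +
ggtitle("Contributions to Dim2")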
Let’s now conduct a cluster analysis, using the k-means method.
Code
data_kmean_country <- data_question1 %>%
dplyr::select(-c(X,code,year,continent,region, population))
#filter data different than 0 and dropping observations equal to 0
filtered_data <- data_kmean_country %>%
group_by(country) %>%
filter_if(is.numeric, all_vars(sd(.) != 0)) %>%
ungroup()
scale_by_country <- filtered_data %>% #scale data
group_by(country) %>%
summarise_all(~ scale(.))
means_by_country <- scale_by_country %>% #mean by country
group_by(country) %>%
summarise_all(~ mean(., na.rm = TRUE))
rownames(means_by_country) <- seq_along(row.names(means_by_country))
# Elbow plot to choose the number of clusters
elbow_plot <- fviz_nbclust(means_by_country[,-1],
kmeans,
method="wss",
linecolor = viridis(1,
begin = 0.5))
# Add a vertical line at the elbow point (4 clusters)
elbow_plot_with_line <- elbow_plot +
geom_vline(xintercept=4,
linetype="dashed",
color = viridis(1,
option = "B",
begin = 0.5))
print(elbow_plot_with_line)
After adapting the data for the cluster analysis, the elbow method indicates that 4 clusters are enough for our analysis.
Code
kmean <- kmeans(means_by_country[,-1],
4,
nstart = 25)
fviz_cluster(kmean,
data = means_by_country[,-1],
repel=FALSE,
depth =NULL,
ellipse.type = "norm",
labelsize = 10,
pointsize = 0.5)Our cluster analysis gives us one principal cluster (here in purple) –> CENTERED ON 0 BECAUSE AFTER DATA SCALED-> REALLY SMALL VALUES –> HOW TO DEAL WITH IT? I TRIED TO TAKE ONLY HFI INTO ACCOUNT BUT NOT WORKING NEITHER. STILL CENTERED ON 0.
While building our regressions, a VIF test revealed high multicollinearity between several explanatory variables in our models, due to the numerous variables we tried to take into account. We therefore decided to use stepwise regression with the forward method, and to choose the model according to the adjusted R-squared.
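For reference, a minimal sketch of such a VIF check (the car package and the full model for goal 1 are assumptions; the document does not show which tool was used):
Code
#### (Sketch) VIF check on the full model for goal 1 ####
library(car)
full_model_g1 <- lm(goal1 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1)
vif(full_model_g1) # values above 5 flag problematic collinearity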
Let’s find, for each goal, the best model that does not involve severe multicollinearity (VIF > 5) and whose variance is well explained by the independent variables (an adjusted R-squared above 0.7). As examples, here are the selected models for goals 12 and 17.
Code
reg_goal1 <-
regsubsets(goal1 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion+ pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade+ ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal2 <-
regsubsets(goal2 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal3 <-
regsubsets(goal3 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal4 <-
regsubsets(goal4 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal5 <-
regsubsets(goal5 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal6 <-
regsubsets(goal6 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal7 <-
regsubsets(goal7 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal8 <-
regsubsets(goal8 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal9 <-
regsubsets(goal9 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal10 <-
regsubsets(goal10 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal11 <-
regsubsets(goal11 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal12 <-
regsubsets(goal12 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal13 <-
regsubsets(goal13 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal15 <-
regsubsets(goal15 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal16 <-
regsubsets(goal16 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
reg_goal17 <-
regsubsets(goal17 ~ unemployment.rate + GDPpercapita +
MilitaryExpenditurePercentGDP + internet_usage + pf_law +
pf_security + pf_movement + pf_religion + pf_assembly +
pf_expression + pf_identity + ef_government + ef_legal +
ef_money + ef_trade + ef_regulation + population,
data = data_question1,
nbest=10,
method="forward")
# plot.regsubsets() draws with base graphics, so ggplot2 layers such as
# theme_minimal() cannot be added to it with "+"; the title is passed
# via main = instead.
for (g in c(1:13, 15:17)) {
  plot(get(paste0("reg_goal", g)),
       scale = "adjr2",
       main = paste0("Stepwise Regression Goal", g,
                     " Forward method, adj.R^2"))
}
Code
plot(reg_goal12, scale = "adjr2",
     main = "Stepwise Regression Goal12 Forward method, adj.R^2")
\[ \begin{split} Goal12\sim \beta_0 &+ GDPpercapita*\beta_1 + PFlaw*\beta_2 + PFreligion*\beta_3 + PFexpression*\beta_4 \\ &+ PFidentity*\beta_5 + EFlegal*\beta_6 + EFtrade*\beta_7 + Population*\beta_8 + ε_i \end{split} \]
Code
plot(reg_goal17, scale = "adjr2",
     main = "Stepwise Regression Goal17 Forward method, adj.R^2")
\[ \begin{split} Goal17\sim \beta_0 &+ unemployment.rate*\beta_1 + MilitaryExpenditurePercentGDP*\beta_2 \\ &+ InternetUsage*\beta_3 + PFlaw*\beta_4 + PFmovement*\beta_5 \\&+ EFgovernment*\beta_6 + EFlegal*\beta_7 + Population*\beta_8 + ε_i \end{split} \]
We can see that for goal 12 the adjusted R squared of the model is high, which means that the selected independent variables explain the dependent variable well. For goal 17, however, the adjusted R squared is only about 0.5. This weaker fit can have several causes, such as a weak relationship between the chosen independent variables and goal 17, omitted variable bias, or overfitting.
The stepwise regressions gave us 12 models out of 16 with an adjusted R squared of 0.6 or more. We therefore need to be careful when interpreting the results of the models with lower explained variance.
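For transparency, the adjusted R squared values discussed above can be read directly from the regsubsets summaries rather than off the plots; a minimal sketch for goal 12 (the same pattern works for any reg_goalX):
Code
s12 <- summary(reg_goal12)
best <- which.max(s12$adjr2)   # index of the best candidate model
s12$adjr2[best]                # its adjusted R^2
coef(reg_goal12, best)         # the coefficients of that model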
Now that we have an optimal model for each goal, let's run the regressions and plot the coefficients to see the influence of each factor on our SDGs.
Code
#Regressions
Goal1lm <- lm(goal1 ~ unemployment.rate + MilitaryExpenditurePercentGDP
+ internet_usage + pf_religion + pf_assembly + pf_identity
+ ef_government + ef_trade,
data = data_question1)
Goal2lm <- lm(goal2 ~ MilitaryExpenditurePercentGDP + internet_usage
+ pf_identity + ef_money + ef_trade + ef_regulation + population,
data = data_question1)
Goal3lm <- lm(goal3 ~ MilitaryExpenditurePercentGDP + internet_usage
+ pf_movement + pf_religion + pf_identity + ef_legal + ef_money
+ ef_trade,
data = data_question1)
Goal4lm <- lm(goal4 ~ GDPpercapita + internet_usage + pf_religion + pf_identity
+ ef_government + ef_legal + ef_trade + population,
data = data_question1)
Goal5lm <- lm(goal5 ~ MilitaryExpenditurePercentGDP + internet_usage + pf_law
+ pf_security + pf_religion + pf_identity + ef_government
+ ef_legal,
data = data_question1)
Goal6lm <- lm(goal6 ~ unemployment.rate + internet_usage + pf_identity
+ ef_legal + ef_money + ef_trade + ef_regulation + population,
data = data_question1)
Goal7lm <- lm(goal7 ~ unemployment.rate + internet_usage + pf_religion
+ pf_assembly + pf_identity + ef_government + ef_trade
+ ef_regulation,
data = data_question1)
Goal8lm <- lm(goal8 ~ unemployment.rate + internet_usage + pf_law
+ pf_expression + ef_legal + ef_trade + ef_regulation
+ population,
data = data_question1)
Goal9lm <- lm(goal9 ~ GDPpercapita + MilitaryExpenditurePercentGDP
+ internet_usage + pf_law + ef_legal + ef_trade + ef_regulation
+ population,
data = data_question1)
Goal10lm <- lm(goal10 ~ unemployment.rate + internet_usage + pf_law
+ pf_security + pf_movement + pf_religion + pf_expression
+ population,
data = data_question1)
Goal11lm <- lm(goal11 ~ unemployment.rate + internet_usage + pf_movement
+ pf_religion + pf_identity + ef_legal + ef_trade
+ population,
data = data_question1)
Goal12lm <- lm(goal12 ~ GDPpercapita + pf_law + pf_religion + pf_expression
+ pf_identity + ef_legal + ef_trade + population,
data = data_question1)
Goal13lm <- lm(goal13 ~ unemployment.rate + GDPpercapita
+ MilitaryExpenditurePercentGDP + pf_law + pf_religion
+ pf_expression + pf_identity + ef_legal,
data = data_question1)
Goal15lm <- lm(goal15 ~ unemployment.rate + MilitaryExpenditurePercentGDP
+ internet_usage + pf_law + pf_religion + ef_government
+ ef_money + population, data = data_question1)
Goal16lm <- lm(goal16 ~ pf_law + pf_security + pf_religion + pf_expression
+ pf_identity + ef_government + ef_legal + population,
data = data_question1)
Goal17lm <- lm(goal17 ~ unemployment.rate + MilitaryExpenditurePercentGDP
+ internet_usage + pf_law + pf_movement + ef_government
+ ef_legal + population,
data = data_question1)
#coefficient plot
# Create a dataframe of tidy models
model_list <- list(Goal1lm, Goal2lm, Goal3lm, Goal4lm, Goal5lm, Goal6lm,
Goal7lm, Goal8lm, Goal9lm, Goal10lm, Goal11lm, Goal12lm,
Goal13lm, Goal15lm, Goal16lm, Goal17lm)
models_tidy <- lapply(model_list, tidy)
names(models_tidy) <- paste("Goal",
c(1:13, 15:17),
"lm",
sep="")
# Combine into a single dataframe
df_tidy <-
do.call(rbind,
lapply(names(models_tidy),
function(x) cbind(models_tidy[[x]],
Model=x)))
# Filter for significant p-values
df_tidy_significant <-
df_tidy[df_tidy$p.value < p_value_threshold, ]
library(RColorBrewer)
model_order <- c("Goal1lm", "Goal2lm", "Goal3lm", "Goal4lm", "Goal5lm",
"Goal6lm", "Goal7lm", "Goal8lm","Goal9lm", "Goal10lm",
"Goal11lm", "Goal12lm", "Goal13lm", "Goal15lm",
"Goal16lm", "Goal17lm")
df_tidy_significant$Model <-
factor(df_tidy_significant$Model, levels = model_order)
myColors <-
colorRampPalette(brewer.pal(11, "Spectral"))(length(unique(df_tidy_significant$term)))
# All models graph
ggplot(df_tidy_significant,
aes(y = Model,
x = estimate,
color = term)) +
geom_point() +
geom_errorbar(aes(xmin = estimate - std.error, xmax = estimate + std.error),
width = 0.2) +
scale_color_manual(values = myColors) +
theme(axis.text.y = element_text(angle = 0, hjust = 1),
plot.title = element_text(hjust = 0.5),
plot.title.position = "plot",
legend.position = "bottom",
legend.text = element_text(size = 6),
legend.title = element_text(size = 7),
legend.key.size = unit(0.4, "cm")) +
labs(title = "Coefficient Plot of Regression Models",
y = "Models",
x = "Estimates")Here is our general plot. For visualization purpose, let’s also plot the positive and negative correlation plots
Code
#positive values only
df_tidy_positive_p <- df_tidy_significant[df_tidy_significant$estimate > 0, ]
df_tidy_positive_p$Model <- factor(df_tidy_positive_p$Model, levels = model_order)
myColors <- colorRampPalette(brewer.pal(11, "Spectral"))(length(unique(df_tidy_positive_p$term)))
# Plot only positive coefficients
ggplot(df_tidy_positive_p,
aes(y = Model,
x = estimate,
color = term)) +
geom_point() +
geom_errorbar(aes(xmin = estimate - std.error, xmax = estimate + std.error),
width = 0.2) +
scale_color_manual(values = myColors) +
theme(axis.text.y = element_text(angle = 0, hjust = 1),
plot.title = element_text(hjust = 0.5),
plot.title.position = "plot",
legend.position = "bottom",
legend.text = element_text(size = 6),
legend.title = element_text(size = 7),
legend.key.size = unit(0.4, "cm")) +
labs(title = "Coefficient Plot of Regression Models (Positive Coefficients)",
y = "Models",
x = "Estimates")
#negative values only
df_tidy_negative <-
df_tidy_significant[df_tidy_significant$estimate < 0, ]
df_tidy_negative$Model <-
factor(df_tidy_negative$Model, levels = model_order)
myColors <-
colorRampPalette(brewer.pal(11, "Spectral"))(length(unique(df_tidy_negative$term)))
ggplot(df_tidy_negative, aes(y = Model, x = estimate, color = term)) +
geom_point() +
geom_errorbar(aes(xmin = estimate - std.error, xmax = estimate + std.error), width = 0.2) +
scale_color_manual(values = myColors) +
theme(axis.text.y = element_text(angle = 0, hjust = 1),
plot.title = element_text(hjust = 0.5),
plot.title.position = "plot",
legend.position = "bottom",
legend.text = element_text(size = 6),
legend.title = element_text(size = 7),
legend.key.size = unit(0.4, "cm")) +
labs(title = "Coefficient Plot of Regression Models (Negative Coefficients)",
y = "Models",
x = "Estimates")In these plots, we can see that most of our variables are influencing positively and negatively our goals. I.e., unemployment.rate is influencing goal 10 & 8 negatively, but positively our other goals. Nevertheless, some factors seems to influence only positively our goals. It is the case for the factors internet_usage. These results are due to the range of the objectives contained in the SDG: as each goal has a different objective, and that our factors are have also a wide range of action, we find different influence between our variables.
In conclusion, after reviewing which variables correlate with each other, addressing our multicollinearity problems and running our regressions on the SDG scores, we have finally been able to get a broad picture of the influence of our factors on the SDGs. Unfortunately, because of the poor explained variance of some regression models, we have to be careful when drawing conclusions from them.
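As a closing sanity check on the multicollinearity handling mentioned above, variance inflation factors can be computed on any of the final models; a sketch assuming the car package is installed (VIF values well above 5 would be a warning sign):
Code
library(car)
vif(Goal1lm)  # one VIF per predictor of the goal 1 model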
3.2 Focus on the relationships among the SDGs
How are the different SDGs linked? (We want to see if some SDGs are linked in the fact that a high score on one implies a high score on the other, and thus if we can make groups of SDGs that are comparable in that way).
3.2.1 EDA: General visualization of the SDGs
To better analyse the relationships between the SDGs, we first visualize the correlations between the SDGs with the help of a heatmap. We chose to set a threshold at |0.75| to concentrate our analysis on the most strongly linked SDGs. We initially intended to use the Pearson correlation, but our data is, as seen in the previous chapter, not normally distributed. We tried to normalize it through logarithmic and square root transformations, but that was not sufficient. For that reason, we chose the Spearman correlation: while not an ideal method, it does not require the data to be normally distributed.
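For reference, here is a minimal sketch of the normality check that pushed us toward Spearman, using goal1 as a representative column (shapiro.test accepts at most 5000 observations, hence the sampling; the transformations are the ones mentioned above):
Code
x <- na.omit(data_4$goal1)
x <- x[x > 0]                          # guard the log transform against zeros
if (length(x) > 5000) x <- sample(x, 5000)
shapiro.test(x)         # raw scores
shapiro.test(log(x))    # log transform
shapiro.test(sqrt(x))   # square-root transform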
To do that, we select only the columns of interest, compute the correlation matrix using the Spearman method, melt the matrix into long format, and plot the heatmap with ggplot2.
Code
#### Heatmap of the correlations ####
# Selecting columns of interest
data_4_goals <- data_4 %>%
dplyr::select(overallscore, goal1, goal2, goal3, goal4, goal5,
goal6, goal7, goal8, goal9, goal10, goal11, goal12,
goal13, goal15, goal16, goal17)
# Initialize matrices for correlations, p-values, and significance
n <- ncol(data_4_goals)
cor_matrix <- matrix(1, n, n)
p_matrix <- matrix(NA, n, n)
colnames(cor_matrix) <- colnames(data_4_goals)
rownames(cor_matrix) <- colnames(data_4_goals)
# Calculating correlations and p-values
for (i in 1:n) {
for (j in 1:n) {
if (i != j) {
test <- cor.test(data_4_goals[[i]], data_4_goals[[j]], method = "spearman")
cor_matrix[i, j] <- test$estimate
p_matrix[i, j] <- test$p.value}}}
# print(cor_matrix)
# All correlations are significantly different from 0 at alpha = 0.05
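# (Robustness sketch: with 17 variables there are 136 pairwise tests, so a
# multiple-testing correction could be applied to the stored p-values, e.g.
# p.adjust(p_matrix[upper.tri(p_matrix)], method = "holm").)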
# Melting the data for ggplot
melted_corr <- melt(cor_matrix)
# Creating the heatmap
ggplot(melted_corr, aes(x = Var1,
y = Var2,
fill = value)) +
geom_tile() +
geom_text(aes(label = ifelse(abs(value) > threshold_heatmap,
sprintf("%.2f", value),
"")),
vjust = 0.5,
size = 2.5) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
begin = 0.15) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45,
hjust = 1)) +
labs(title = paste("Heatmap of Spearman Correlations \n(Only correlation values higher ",
threshold_heatmap,
" are shown)",
sep = ""),
x = "",
y = "")The correlation can be read on the graph. The darker the color, the stronger the correlation. If the correlation value is not shown, it means that the goals correlation does not exceed our threshold of ±0.75.
It is evident that the Sustainable Development Goals (SDGs) are intricately interconnected. However, certain goals seem to be less interrelated than others: SDG 1 (No Poverty) and SDG 10 (Reduced Inequalities) have a weaker correlation with the rest of the goals, and Goal 15 (Life on Land) also has a weaker interconnection with the other SDGs. It is also interesting to note that some goals are negatively correlated with others. For instance, based on the Spearman correlation, goal 12 (Responsible Consumption and Production) and goal 13 (Climate Action) are negatively correlated with the other goals. This suggests that the higher a country scores on a goal other than 12 or 13, the lower its scores on goals 12 and 13 tend to be. Given their similar nature, it is not surprising that goals 12 and 13 are highly correlated with each other.
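One way to quantify "less interrelated" is the mean absolute Spearman correlation of each goal with all the others; a sketch reusing the cor_matrix computed above, which should echo the weaker links of goals 1, 10 and 15 noted here:
Code
cm <- cor_matrix
diag(cm) <- NA  # ignore the trivial self-correlations
sort(rowMeans(abs(cm), na.rm = TRUE))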
3.2.2 Analysis: Factor analysis and Stepwise regression applied to the SDGs
So far, we have seen that the goals are mostly correlated. We now want to see whether we can group them into a smaller number of factors. To do so, we use a principal component analysis (PCA). We first look at the scree plot to decide how many components to keep, and then at the biplot to see how the goals group together.
Code
#### Scree Plot ####
# Selecting only the goals columns and renaming them
goals_data <- data_4 %>%
dplyr::select(goal1, goal2, goal3, goal4, goal5,
goal6,goal7, goal8, goal9, goal10, goal11, goal12,
goal13, goal15, goal16, goal17) %>%
rename(G1 = goal1, G2 = goal2, G3 = goal3, G4 = goal4, G5 = goal5,
G6 = goal6, G7 = goal7, G8 = goal8, G9 = goal9, G10 = goal10,
G11 = goal11, G12 = goal12, G13 = goal13, G15 = goal15,
G16 = goal16, G17 = goal17)
# Scaling the data and running PCA
goals_data_scaled <- scale(goals_data)
pca_result <- prcomp(goals_data_scaled)
# Plotting Scree plot to visualize the importance of each principal component
fviz_eig(pca_result,
addlabels = TRUE,
linecolor = viridis(1,
option = "B",
begin = 0.5),
barcolor = "black",
barfill = Fix_color) +
ggtitle(" PCA - Scree plot") +
theme_minimal()
# getting the eigenvalues
eigenvalues <- pca_result$sdev^2
We see clearly that the first component is the most important one. Guided by the Kaiser criterion, which advises retaining only components with eigenvalues exceeding 1, the first three components emerge as candidates, the third having an eigenvalue of 1.016. We now want to see how the first two components look in a biplot.
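The Kaiser criterion can be applied in one line to the eigenvalues just computed, together with the cumulative share of variance each component explains (a sketch):
Code
which(eigenvalues > 1)                             # components retained by Kaiser
round(cumsum(eigenvalues) / sum(eigenvalues), 3)   # cumulative variance share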
Code
#### Biplot ####
# Plotting Biplot to visualize the two main dimensions
fviz_pca_biplot(pca_result,
label = "var",
col.var = Fix_color,
geom = "point",
pointsize = 0.1,
labelsize = 3,
repel = TRUE) +
ggtitle(" PCA - Biplot") +
theme_minimal()
The biplot offers an interesting visualization that clearly illustrates the relationship between the various goals and the first two components. The second dimension is mostly correlated with Goals 10 (Reduced Inequalities) and 15 (Life on Land); the remaining variables are more correlated with Dimension 1. The biplot reveals three distinct groups of variables, each playing a unique role. One group contains Goals 12 (Responsible Consumption and Production) and 13 (Climate Action); it is no surprise that they point in the opposite direction from the rest of the variables, given their negative correlation with the others discussed previously, and both capture environmental issues. Another group contains Goals 10 (Reduced Inequalities) and 15 (Life on Land), which is surprising: grouping the goal that addresses inequalities with the goal that concerns life on land has no obvious interpretation. The last group contains the rest of the variables. This categorization helps to understand the distinct influences and interactions between the goals.
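The grouping read off the biplot can be checked numerically through the loadings of the goals on the first two components (a sketch; pca_result comes from the chunk above):
Code
round(pca_result$rotation[, 1:2], 2)  # loadings on Dim1 and Dim2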
We also performed a stepwise regression to see more precisely how the goals are correlated with each other, using forward selection from the leaps package.
Code
#### Stepwise regression ####
# Selecting only the goals and overallscore columns
goals_data <- data_4 %>% # Selecting only the columns needed
dplyr::select(overallscore, goal1, goal2, goal3, goal4, goal5,
goal6,goal7, goal8, goal9, goal10, goal11, goal12,
goal13, goal15, goal16, goal17)
# Performing a stepwise regression trying to explain each variables with the others
leaps_o <- regsubsets(overallscore ~ goal1 + goal2 + goal3 + goal4 + goal5 +
goal6 + goal7 + goal8 + goal9 + goal10 + goal11 +
goal12 + goal13 + goal15 + goal16 + goal17,
data = goals_data, nbest=1, method = "forward")
leaps_1 <- regsubsets(goal1 ~ goal2 + goal3 + goal4 + goal5 + goal6 + goal7 +
goal8 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_2 <- regsubsets(goal2 ~ goal1 + goal3 + goal4 + goal5 + goal6 + goal7 +
goal8 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_3 <- regsubsets(goal3 ~ goal1 + goal2 + goal4 + goal5 + goal6 + goal7 +
goal8 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_4 <- regsubsets(goal4 ~ goal1 + goal2 + goal3 + goal5 + goal6 + goal7 +
goal8 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_5 <- regsubsets(goal5 ~ goal1 + goal2 + goal3 + goal4 + goal6 + goal7 +
goal8 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_6 <- regsubsets(goal6 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal7 +
goal8 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_7 <- regsubsets(goal7 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal8 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_8 <- regsubsets(goal8 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal9 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_9 <- regsubsets(goal9 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal10 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_10 <- regsubsets(goal10 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal9 + goal11 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_11 <- regsubsets(goal11 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal9 + goal10 + goal12 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_12 <- regsubsets(goal12 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal9 + goal10 + goal11 + goal13 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_13 <- regsubsets(goal13 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal9 + goal10 + goal11 + goal12 +
goal15 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_15 <- regsubsets(goal15 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal9 + goal10 + goal11 + goal12 +
goal13 + goal16 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_16 <- regsubsets(goal16 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal9 + goal10 + goal11 + goal12 +
goal13 + goal15 + goal17, data = goals_data, nbest=1,
method = "forward")
leaps_17 <- regsubsets(goal17 ~ goal1 + goal2 + goal3 + goal4 + goal5 + goal6 +
goal7 + goal8 + goal9 + goal10 + goal11 + goal12 +
goal13 + goal15 + goal16, data = goals_data, nbest=1,
method = "forward")Code
#### Regression found with stepwise regression ####
# Getting all the linear models from the stepwise regression
mod_o <- lm(
overallscore ~ goal2 + goal3 + goal4 + goal6 + goal7 + goal10 + goal15 + goal17,
data = goals_data)
mod_1 <- lm(
goal1 ~ goal3 + goal4 + goal5 + goal6 + goal7 + goal9 + goal13 + goal17,
data = goals_data)
mod_2 <- lm(
goal2 ~ goal4 + goal5 + goal6 + goal8 + goal9 + goal12 + goal16 + goal17,
data = goals_data)
mod_3 <- lm(
goal3 ~ goal1 + goal4 + goal7 + goal8 + goal9 + goal11 + goal15 + goal16,
data = goals_data)
mod_4 <- lm(
goal4 ~ goal1 + goal2 + goal3 + goal5 + goal7 + goal11 + goal16 + goal17,
data = goals_data)
mod_5 <- lm(
goal5 ~ goal1 + goal4 + goal6 + goal9 + goal10 + goal11 + goal15 + goal17,
data = goals_data)
mod_6 <- lm(
goal6 ~ goal1 + goal2 + goal3 + goal5 + goal8 + goal9 + goal11 + goal15,
data = goals_data)
mod_7 <- lm(
goal7 ~ goal1 + goal3 + goal4 + goal5 + goal6 + goal8 + goal11 + goal13,
data = goals_data)
mod_8 <- lm(
goal8 ~ goal2 + goal5 + goal6 + goal9 + goal10 + goal12 + goal15 + goal17,
data = goals_data)
mod_9 <- lm(
goal9 ~ goal1 + goal2 + goal3 + goal8 + goal10 + goal12 + goal13 + goal17,
data = goals_data)
mod_10 <- lm(
goal10 ~ goal1 + goal5 + goal9 + goal11 + goal13 + goal15 + goal16 + goal17,
data = goals_data)
mod_11 <- lm(
goal11 ~ goal3 + goal4 + goal5 + goal6 + goal7 + goal10 + goal15 + goal16,
data = goals_data)
mod_12 <- lm(
goal12 ~ goal2 + goal7 + goal8 + goal9 + goal13 + goal15 + goal16 + goal17,
data = goals_data)
mod_13 <- lm(
goal13 ~ goal1 + goal5 + goal7 + goal9 + goal10 + goal12 + goal15 + goal16,
data = goals_data)
mod_15 <- lm(
goal15 ~ goal3 + goal4 + goal5 + goal6 + goal10 + goal11 + goal12 + goal13,
data = goals_data)
mod_16 <- lm(
goal16 ~ goal2 + goal3 + goal4 + goal10 + goal11 + goal12 + goal13 + goal17,
data = goals_data)
mod_17 <- lm(
goal17 ~ goal1 + goal5 + goal8 + goal9 + goal10 + goal11 + goal12 + goal16,
data = goals_data)
Below, we create a graph presenting the models that the stepwise regression chose, based on the \(R^2\:adjusted\), as the best regression for each of our goals. The \(R^2\:adjusted\) measures the quality of a model while taking the number of explanatory variables into account. As expected, the variable overallscore is explained by the other goals with similar coefficients for each of them; this makes sense since the overallscore has been calculated directly from the 17 scores. We can see that none of the goals seems to be used in the regressions more than the others, and none of our explanatory variables received a high coefficient. We also found that, in general, the models do not select all variables to explain a given goal, which comes from the high correlation between some goals.
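Before the graph, the adjusted R squared values this discussion relies on can be pulled from the fitted models directly; a sketch for the two models examined in detail later:
Code
summary(mod_o)$adj.r.squared  # overallscore model, around 0.98
summary(mod_9)$adj.r.squared  # goal 9 model, around 0.81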
Code
#### Graph of the different models ####
# Create a dataframe with our selected models
model_list <-
list(mod_o, mod_1, mod_2, mod_3, mod_4, mod_5, mod_6, mod_7, mod_8, mod_9,
mod_10, mod_11, mod_12, mod_13, mod_15, mod_16, mod_17)
# Create a dataframe with the coefficients of our models
models_tidy <-
lapply(model_list, tidy)
# Rename the columns so they can be printed on the graph afterwards
names(models_tidy) <-
c("Overallscore ~ others", "Goal1 ~ others", "Goal2 ~ others",
"Goal3 ~ others", "Goal4 ~ others", "Goal5 ~ others", "Goal6 ~ others",
"Goal7 ~ others", "Goal8 ~ others", "Goal9 ~ others", "Goal10 ~ others",
"Goal11 ~ others", "Goal12 ~ others", "Goal13 ~ others", "Goal15 ~ others",
"Goal16 ~ others", "Goal17 ~ others")
# Combine into a single dataframe
df_tidy <-
do.call(rbind,
lapply(names(models_tidy),
function(x) cbind(models_tidy[[x]], Model=x)))
# Significance threshold used to filter the displayed coefficients
significance_level <- 0.05
# Filter for significant p-values
df_tidy_significant <-
df_tidy[df_tidy$p.value < significance_level, ]
# Plot graph with all models
ggplot(df_tidy_significant,
aes(y = Model,
x = estimate,
color = term)) +
geom_point() +
xlim(-1, 1) + # Changed from ylim to xlim
theme(axis.text.y = element_text(angle = 0, vjust = 1, size = 8),
axis.text.x = element_text(angle = 45, vjust = 1, size = 8),
legend.position = "bottom",
legend.text = element_text(size = 6.5),
legend.title = element_text(size = 7),
legend.key.size = unit(0.3, "cm")) +
labs(title = "Coefficient Plot of Regression Models",
y = "Models", # Swapped x and y labels
x = "Estimates") +
colors_pal
The following graphs show the residuals versus the fitted values for the models explaining the overallscore and goal 9. Let's first look at the model that best explains the Overallscore variable:
\[ \begin{split} Overallscore\sim \beta_0 &+ Goal~2*\beta_1 + Goal~3*\beta_2 + Goal~4*\beta_3 + Goal~6*\beta_4 \\ &+ Goal~7*\beta_5 + Goal~10*\beta_6 + Goal~15*\beta_7 + Goal~17*\beta_8 + ε_i \end{split} \]
Code
#### Residuals vs Fitted plot for overallscore ####
# Overallscore ~ others variables
ggplot(mod_o,
aes(x = .fitted,
y = .resid)) +
geom_point(aes(color = abs(.resid)),
size = 0.5) +
scale_color_viridis_c(name = "Residuals",
option = "D") +
geom_hline(yintercept = 0,
linetype = 2,
size = 0.6) +
xlab("Fitted Values") +
ylab("Residuals") +
ggtitle("Residuals vs. Fitted Plot (Overallscore ~ others variables)") +
geom_smooth(se = FALSE,
size = 0.75,
span = 0.95,
method = "loess",
color = viridis(1,
option = "B",
begin = 0.5)) +
theme_minimal() +
theme(legend.position = "none")As the we now from the nature of the overallscore variable. It has been calculated using the 17 goal scores. Hence, the stepwise regression did not struggle to find a model with good results. The \(R^2\:adjusted\) of 0.98 of this model suggest that the model is good quality. As we can see in the first graph, the residuals vs. fitted plot also suggest a good quality. The residuals are well distributed around 0, the residual and the red line, representing the mean residual, is almost flat and very close to 0. Let’s now look at the model that according to our stepwise regression, is the best model at explaining our goal 9 scores with a \(R^2\:adjusted\) of 0.81 but in reality that does not perform well according to the residual versus fitted plot. The predicted model looks as follow:
\[ \begin{split} Goal~9\sim \beta_0 &+ Goal~1*\beta_1 + Goal~2*\beta_2 + Goal~3*\beta_3 + Goal~8*\beta_4 \\ &+ Goal~10*\beta_5 + Goal~12*\beta_6 + Goal~15*\beta_7 + Goal~17*\beta_8 + ε_i \end{split} \]
Code
#### Residuals vs Fitted plot for goal 9 ####
# Goal 9 ~ others variables
ggplot(mod_9,
aes(x = .fitted,
y = .resid)) +
geom_point(aes(color = abs(.resid)),
size = 0.5) +
scale_color_viridis_c(name = "Residuals",
option = "D") +
geom_hline(yintercept = 0,
linetype = 2,
size = 0.6) +
xlab("Fitted Values") +
ylab("Residuals") +
ggtitle("Residuals vs. Fitted Plot (Goal 9 ~ others variables)") +
geom_smooth(se = FALSE,
size = 0.75,
span = 0.95,
method = "loess",
color = viridis(1,
option = "B",
begin = 0.5)) +
theme_minimal() +
theme(legend.position = "none")As we can see in the graph, the model that tried to explain goal 9 is not as good. The mean residual is not flat and vary a lot. This could suggest non-linearity.
To conclude this part, we have seen some interesting relationships between the SDG scores. We found that if a country performs well on some goals, it does not necessarily perform well on the others; some goals are negatively correlated. We then turned to grouping goals together with a principal component analysis and found that the analysis suggested grouping some goals that, at first sight, would not have been put together. We also performed stepwise regressions to see how the individual goals could be explained by the others and found that, in general, the models do not select all variables to explain a given goal, which again comes from the high correlation between some goals.
3.3 Focus on the evolution of SDG scores over time
How has the adoption of the SDGs in 2015 influenced the achievement of SDGs?
We create one new variable per goal that captures the difference in SDG score between the year of the observation and the previous year. This will allow us to see how the countries improve (or not) on SDG scores each year.
Code
data_question2 <- read.csv(here("scripts", "data", "data_question24.csv"))
data_question2 <- data_question2 %>%
dplyr::select(-X)
data_question2 <- data_question2 %>%
group_by(code) %>%
mutate(across(5:21, ~ . - dplyr::lag(.), .names = "diff_{.col}")) %>%
ungroup()
3.3.1 EDA: General time evolution of SDG scores
First, we look at the evolution of the overall SDG achievement score over time by continent and by region, and we see that the general evolution of SDG scores around the world is increasing over the years, but very slowly. We also plot the average improvement/decrease in overall score across the years. Looking at the continents, we see that Europe is above the others while Africa is below, but in general all have increasing overall scores. In addition, the different continents have quite steady score differences between years, except Oceania, which fluctuates more and sometimes shows score decreases, for instance in 2010; it also posts the highest average increase, in 2014. We also observe that toward the latest years (2021-2022) the improvements are smaller and decreases are more frequent. However, we must keep in mind that these are all very small differences: the largest improvement is a little above 1 percentage point of the overall score in one year.
Code
#### Mean overall SDG score and score difference by year ####
data1 <- data_question2 %>%
group_by(year, continent) %>%
mutate(mean_overall_score_by_year = mean(overallscore))
# Mean overall SDG achievement score by year plot
plot1 <- ggplot(data1) +
geom_line(mapping = aes(x = year,
y = mean_overall_score_by_year,
color = continent),
lwd = 0.6) +
scale_y_continuous(limits = c(0, 100)) +
labs(title = "Mean overall SDG \nachievement score",
y = "Mean Overall SDG Score",
x = "Year"
) +
theme(legend.position = "none",
plot.title = element_text(hjust = 0.5, size = 10),
axis.title.x = element_text(size = 8),
axis.title.y = element_text(size = 8)) +
colors_pal
data2 <- data_question2 %>%
group_by(year, continent) %>%
mutate(mean_diff_overall_score_by_year = mean(diff_overallscore))
# Mean score difference by year plot
plot2 <- ggplot(data2) +
geom_line(mapping=aes(x=year,
y=mean_diff_overall_score_by_year,
color=continent),
lwd=0.6) +
geom_hline(yintercept = 0,
linetype = "dashed",
color = "black") +
scale_y_continuous(limits = c(-0.4, 1.2)) +
labs(title = "Score difference",
y = "Mean Overall SDG Score difference",
x = "Year"
) +
theme(legend.position = "right",
plot.title = element_text(hjust = 0.5, size = 10),
axis.title.x = element_text(size = 8),
axis.title.y = element_text(size = 8),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
colors_pal
plot1 + plot2
This view, which groups the countries by region, refines the previous information. Indeed, it is Western Europe that is particularly above and Sub-Saharan Africa that is clearly below. Regarding the score difference from one year to another, we still see Oceania having the greatest fluctuations, but Caucasus & Central Asia as well as Eastern Europe show a peak in the early 2000s that we could not see before. South Asia also has a relatively high improvement in 2017.
Code
#### Evolution of the mean overall score and score diff by region ####
data3 <- data_question2 %>%
group_by(year, region) %>%
mutate(mean_overall_score_by_year=mean(overallscore))
# Evolution of the mean overall SDG achievement score plot
plot3 <- ggplot(data3) +
geom_line(mapping=aes(x = year,
y = mean_overall_score_by_year,
color = region),
lwd = 0.5) +
scale_y_continuous(limits = c(0, 100)) +
labs(title = "Evolution of the mean \noverall SDG achievement score",
y = "Mean Overall SDG Score",
x = "Year") +
theme(legend.position="none",
plot.title = element_text(hjust = 0.5, size = 10),
axis.title.x = element_text(size = 8),
axis.title.y = element_text(size = 8),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
colors_pal
data4 <- data_question2 %>%
group_by(year, region) %>%
mutate(mean_diff_overall_score_by_year = mean(diff_overallscore))
# Evolution of the mean score difference plot
plot4 <- ggplot(data4) +
geom_line(mapping=aes(x = year,
y = mean_diff_overall_score_by_year,
color = region),
lwd = 0.5) +
geom_hline(yintercept = 0,
linetype = "dashed",
color = "black") +
scale_y_continuous(limits = c(-0.4,1.2)) +
labs(title = "Score difference",
y = "Mean Overall SDG Score difference",
x = "Year"
) +
theme(legend.position = "none",
plot.title = element_text(hjust = 0.5, size = 10),
axis.title.x = element_text(size = 8),
axis.title.y = element_text(size = 8),
legend.text = element_text(size = 8),
legend.title = element_text(size = 10)) +
colors_pal
# Creation of the legend
legend <- ggplot(data4) +
geom_line(mapping=aes(x = year,
y = mean_diff_overall_score_by_year,
color = region),
lwd = 0.5) +
theme(legend.position = "bottom",
legend.text = element_text(size = 6),
legend.title = element_text(size = 8))+
guides(color = guide_legend(nrow = 4)) +
colors_pal
legend <- cowplot::get_legend(legend)
(plot3 + plot4) / legend
Second, we look at the evolution of the 16 SDG achievement scores over time, first for the whole world and then by continent. We notice that all SDGs except goal 9 (industry, innovation and infrastructure) are close to one another in terms of level and growth: goal 9 starts far below the others in 2000 and grows faster, until exceeding 50%. In addition, some goals did not increase their scores much over the last two decades, for example goal 13 (climate action) and goal 12 (responsible consumption and production). The score differences are mostly contained between 0 and 0.5 percentage points of increase per year. Some goals have peaks, namely goals 15, 3, 5 and 17, and goal 9 has the highest average improvement of all in 2017, at 4 percentage points, almost twice as high as the other large improvements. Some goals have bad years, like goal 10 or goal 15, but never below -0.5, except goal 16 in 2022, which dips a little lower. Finally, some goals are very steady, for example goal 12, which stays around zero, and goal 6, which is always a little above the zero (no change) line.
Code
#### Evolution of the mean scores and score difference by year ####
data5 <- data_question2 %>%
group_by(year) %>%
summarise(across(starts_with("goal"), mean, na.rm=TRUE)) %>%
pivot_longer(cols = starts_with("goal"), names_to = "goal", values_to = "mean_value")
# Evolution of the mean SDG achievement scores across the world plot
plot5 <- ggplot(data = data5) +
geom_line(mapping = aes(x = year, y = mean_value, color = goal), size = 0.7) +
geom_point(mapping = aes(x = year, y = mean_value, color = goal), size = 1) +
colors_pal +
scale_y_continuous(limits = c(0, 100)) +
labs(title = "Evolution of the mean SDG \nachievement scores across the world",
y = "Mean SDG Scores",
x = "Year"
) +
theme(legend.position="none", plot.title = element_text(hjust=0.5, size= 10), axis.title.x = element_text(size= 8), axis.title.y = element_text(size= 8))
data6 <- data_question2 %>%
group_by(year) %>%
summarise(across(starts_with("diff_goal"), mean, na.rm=TRUE)) %>%
pivot_longer(cols = starts_with("diff_goal"), names_to = "goal", values_to = "mean_diff_value")
# Score difference plot
plot6 <- ggplot(data = data6) +
geom_line(mapping = aes(x = year,y = mean_diff_value, color = goal), size = 0.3) +
#geom_point(mapping = aes(x = year, y = mean_diff_value, color = goal), size = 1) +
geom_hline(yintercept = 0, linetype = "dashed", color = "black") +
colors_pal +
scale_y_continuous(limits = c(-1.5, 4)) +
labs(title = "Score difference",
y = "Mean SDG Scores difference",
x = "Year"
) +
theme(legend.position="none", plot.title = element_text(hjust=0.5, size= 10), axis.title.x = element_text(size= 8), axis.title.y = element_text(size= 8), legend.text = element_text(size= 6),legend.title = element_text(size= 10))
# Creation of the legend
plot7 <- ggplot(data = data5) +
geom_line(mapping = aes(x = year, y = mean_value, color = goal), size = 1) +
colors_pal +
theme(legend.position="bottom", legend.text = element_text(size= 8), legend.title = element_blank())+
guides(color = guide_legend(nrow=3))
legend <- cowplot::get_legend(plot7)
(plot5 + plot6) / legend
We continue with the graph that distinguishes continents to get more information.
Code
#### Evolution of the mean scores by continent ####
data5 <- data_question2 %>%
group_by(year, continent) %>%
summarise(across(starts_with("goal"), mean, na.rm=TRUE)) %>%
pivot_longer(cols = starts_with("goal"), names_to = "goal", values_to = "mean_value")
# Creation of the plot
ggplot(data = data5) +
geom_line(mapping = aes(x = year,
y = mean_value,
color = continent),
size = 0.7) +
scale_y_continuous(limits = c(0, 100)) +
labs(title = "Evolution of the mean SDG achievement scores by continent",
y = "Mean SDG Scores",
x = "Year"
) +
facet_wrap(~ goal, nrow = 4) +
theme_light() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1, size = 6),
plot.title = element_text(hjust=0.5, size=10)) +
colors_pal
We observe that most of the time Europe is at the top of the graph and Africa at the bottom, except for goals 12 and 13, which are linked to ecology. Several other points stand out:
The Americas are far behind the other parts of the world regarding goal 10: reduced inequalities.
Africa is far behind the other continents (even if catching up) for goals 1, 3, 4 and 7.
Goal 9 (industry, innovation and infrastructure) shows exponential growth for almost all continents.
Third, we create an interactive map of the world that lets us navigate from year 2000 to 2022 and see each country's level of SDG achievement (overall score).
Code
#### Interactive map of the world showing the scores by countries ####
# Load world map data
world <- ne_countries(scale = "medium",returnclass = "sf")
# Merge data with the world map data
data0 <- merge(world, data_question2, by.x = "iso_a3", by.y = "code", all.x = TRUE)
data0 <- data0 %>%
filter(!is.na(overallscore))
unique_years <- unique(data0$year)
plot_ly(
type = "choropleth",
z = ~data0$overallscore[data0$year == 2000],
locations = ~data0$iso_a3[data0$year == 2000],
text = ~paste("Country: ",
data0$name[data0$year == 2000],
"<br>Overall Score: ",
data0$overallscore[data0$year == 2000]),
# colors = viridis(4, direction = -1),
colors = c("darkred",
"orange",
"yellow",
"darkgreen"),
colorbar = list(title = "Overall Score",
cmin = 40,
cmax = 87),
zmin = 40,
zmax = 87,
hoverinfo = "text") %>%
layout(
title = "SDG overall score evolution",
sliders = list(
list(
active = 0,
currentvalue = list(prefix = "Year: "),
steps = lapply(seq_along(unique_years), function(i) {
year <- unique_years[i]
list(
label = as.character(year),
method = "restyle",
args = list(
list(
z = list(data0$overallscore[data0$year == year]),
locations = list(data0$iso_a3[data0$year == year]),
text = list(~paste("Country: ",
data0$name[data0$year == year],
"<br>Overall Score: ",
data0$overallscore[data0$year == year]))
)
)
)
})
)
)
)
Again, we see that the overall SDG achievement score is increasing and that the countries shown mostly in red (bad scores) are in Africa. However, it is also there that the score increases most rapidly. Our hypothesis is that when a score is very low it is easier to improve, whereas when it becomes very high (around 90%) it may be hard to increase further, because that would mean near perfection. In the next section, we will further investigate this idea.
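As a quick first look at this catch-up hypothesis, we can regress the yearly score change on the previous year's level; a sketch where prev_score is a helper column we add here, and a negative slope would mean that low scorers improve faster:
Code
conv <- data_question2 %>%
  group_by(code) %>%
  mutate(prev_score = dplyr::lag(overallscore)) %>%
  ungroup()
summary(lm(diff_overallscore ~ prev_score, data = conv))$coefficients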
3.3.2 Analysis: SDG adoption in 2015
Preparing for the specific question around 2015, we only keep the years from 2009 to 2022 (roughly seven years on each side of 2015). In addition, we create a binary variable that takes the value 1 if the observation occurred after 2015 and zero otherwise.
Code
# Create a new column (binary variable) with value 1 if the year is after 2015 and zero otherwise.
binary2015 <- data_question2 %>%
mutate(after2015 = ifelse(year > 2015, 1, 0)) %>%
filter(as.numeric(year) >= 2009)
We begin by looking at the distribution of the difference in SDG scores from one year to the next (an improvement if it is above zero, a deterioration if it is below zero).
Code
# histogram of difference in scores between years
unique_years <- unique(binary2015$year)
plot_ly() %>%
add_trace(
type = "histogram",
data = binary2015,
x = ~diff_overallscore[year == 2009],
marker = list(color = Fix_color, line = list(color = "black", width = 1))
) %>%
layout(
title = "Distribution of SDG evolution",
xaxis = list(title = "Year difference SDG score", range = c(-3, 3)),
yaxis = list(title = "Frequency", range = c(0, 40)),
sliders = list(
list(
active = 0,
currentvalue = list(prefix = "Year: "),
steps = lapply(seq_along(unique_years), function(i) {
year <- unique_years[i]
list(
label = as.character(year),
method = "restyle",
args = list(
list(x = list(binary2015$diff_overallscore[binary2015$year == year]))
)
)
})
)
)
)
We notice that across the years the distribution stays on the right of the x-axis, which means that there are more improvements than deteriorations. When there is a deterioration, it is usually less than one percent per year, with a few extreme exceptions: in 2013, for instance, one country's overall SDG score dropped by almost 3%. It is also rare to see improvements of more than 2% per year. Regarding our specific question, we do not see a major shift of the distribution after 2015; if that were the case, the distribution would move further to the right, whereas except for 2017 the values are more and more centered around zero, which means fewer score improvements overall.
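A compact formal check of the same question is an interacted trend model, a sketch where the year:after2015 coefficient captures a change in slope after the adoption and after2015 a level jump:
Code
its <- lm(overallscore ~ year * after2015, data = binary2015)
summary(its)$coefficients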
After having visualized the improvements and declines of the overall SDG score for the whole world, we now look at the top 5 countries in terms of improvement each year, and we see that major improvements often come from Sub-Saharan African countries or from the Middle East and North Africa. This confirms that more progress is made in these regions, but we also know from our previous visualizations that their initial scores are lower. Moreover, we note that the largest improvements are around 3% per year and were mostly achieved before 2015; in terms of maximum improvements, then, the adoption of the SDGs in 2015 did not have a strong impact. We also notice that 2020 is the year with the smallest top improvements. We keep that in mind for the next question, regarding events and specifically COVID.
Code
top_n_values <- 5
# Build the yearly top-5 bar charts with ggplot2 and patchwork
custom_colors <- c(viridis(5, begin = 0.2), "lightblue", "grey90", magma(3))
# Get unique regions in the dataset
unique_regions <- unique(binary2015$region)
# Create a color dictionary mapping each region to a specific color
region_colors <- setNames(custom_colors[1:length(unique_regions)], unique_regions)
library(patchwork)
plots <- list()
for (year in unique_years) {
top_countries <- binary2015[binary2015$year == year, ] %>%
arrange(desc(year), desc(diff_overallscore)) %>%
head(n = top_n_values)
plot <- ggplot(data = top_countries,
mapping = aes(x = country,
y = diff_overallscore,
fill = region)) +
geom_bar(stat = "identity") +
scale_fill_manual(values = region_colors) + # Use the specified colors
labs(title = paste(year),
x = NULL,
y = NULL) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1, size= 6),
axis.text.y = element_text(size= 6),
legend.position = "none",
plot.title = element_text(size = 10)) +
scale_y_continuous(limits = c(0, 3.7))
plots[[as.character(year)]] <- plot
}
wrap <- wrap_plots(plots, ncol = 5)
wrap + plot_annotation(
title = 'Best 5 countries in terms of SDG score improvement by region'
)
Code
legend_data <- data.frame(region = unique_regions)
legend_plot <- ggplot(legend_data, aes(x = region, fill = region)) +
geom_bar() +
scale_fill_manual(values = region_colors) +
theme(legend.position="top",
legend.text = element_text(size = 6),
legend.title = element_blank(), legend.key.size = unit(0.3, "cm"))+
guides(fill = guide_legend(nrow=2))
legend <- cowplot::get_legend(legend_plot)
grid.newpage()
grid.draw(legend)
We continue by looking at the worst 5 countries in terms of decline in overall SDG score each year, and we see that the years with the worst declines are the most recent ones. Indeed, declines were generally no more than 1% until 2018, when they became more frequent. We notice that the adoption of the SDGs in 2015 may have had a good short-term impact: during the two following years, the worst SDG score declines were small (no more than 1% in 2016 and no more than 0.5% in 2017). The situation was stabilizing, but only briefly, because the more extreme deteriorations came afterwards. Interestingly, the regions with the worst declines over the past twelve years vary widely; the only pattern appears during the last four years, when most of them are in Latin America and the Caribbean.
Code
plots <- list()
for (year in unique_years) {
top_countries <- binary2015[binary2015$year == year, ] %>%
arrange(desc(year), diff_overallscore) %>%
head(n = top_n_values)
plot <- ggplot(data = top_countries,
mapping = aes(x = country,
y = diff_overallscore,
fill = region)) +
geom_bar(stat = "identity") +
scale_fill_manual(values = region_colors) + # Use the specified colors
labs(title = paste(year),
x = NULL,
y = NULL) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45,vjust = 1, hjust = 1, size=6),
axis.text.y = element_text(size= 6), legend.position = "none",
plot.title = element_text(size = 10)) +
scale_y_continuous(limits = c(-3,0))
plots[[as.character(year)]] <- plot
}
# Arrange the plots in a 4x4 grid using patchwork
wrap <- wrap_plots(plots, ncol = 5)
wrap + plot_annotation(
title = 'Worst 5 countries in terms of SDG score improvement'
)Code
grid.draw(legend)
We move on to the specific SDG scores and look at the 20 best improvements per score. We additionally differentiate between improvements that occurred before and after 2015. We want to see which goals get the best improvements and which countries put the most effort into them.
Code
# Best improvements
data_long <- binary2015 %>%
pivot_longer(cols = c(starts_with("diff_goal"), "diff_overallscore"),
names_to = "goal",
values_to = "improvement") %>%
group_by(goal) %>%
top_n(20, wt = improvement) %>%
ungroup()
plot_ly() %>%
add_trace(
type = "bar",
data = data_long,
x = ~country[after2015 == 1 & goal == "diff_overallscore"],
y = ~improvement[after2015 == 1 & goal == "diff_overallscore"],
legendgroup = "after 2015",
name = "after 2015",
marker = list(color = Fix_color,
size = 10),
showlegend = TRUE
) %>%
add_trace(
type = "bar",
x = ~country[after2015 == 0 & goal == "diff_overallscore"],
y = ~improvement[after2015 == 0 & goal == "diff_overallscore"],
legendgroup = "before 2015",
name = "before 2015",
marker = list(color = viridis(1),
size = 10),
showlegend = TRUE
) %>%
layout(
title = paste("Top 20 countries per SDG Score evolution"),
yaxis = list(title = "Year difference SDG score", range = c(0, 50)),
xaxis = list(title = "Countries", categoryorder = "total ascending", tickangle = -45),
barmode = "stack",
updatemenus = list(
list(
buttons = list(
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_overallscore"],
~improvement[after2015 == 0 & goal == "diff_overallscore"]
),
x = list(
~country[after2015 == 1 & goal == "diff_overallscore"],
~country[after2015 == 0 & goal == "diff_overallscore"]
)
)
),
label = "Overall score",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal1"],
~improvement[after2015 == 0 & goal == "diff_goal1"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal1"],
~country[after2015 == 0 & goal == "diff_goal1"]
)
)
),
label = "Goal 1: \nno poverty",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal2"],
~improvement[after2015 == 0 & goal == "diff_goal2"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal2"],
~country[after2015 == 0 & goal == "diff_goal2"]
)
)
),
label = "Goal 2: \nzero hunger",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal3"],
~improvement[after2015 == 0 & goal == "diff_goal3"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal3"],
~country[after2015 == 0 & goal == "diff_goal3"]
)
)
),
label = "Goal 3: good health \nand well-being",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal4"],
~improvement[after2015 == 0 & goal == "diff_goal4"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal4"],
~country[after2015 == 0 & goal == "diff_goal4"]
)
)
),
label = "Goal 4: \nquality education",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal5"],
~improvement[after2015 == 0 & goal == "diff_goal5"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal5"],
~country[after2015 == 0 & goal == "diff_goal5"]
)
)
),
label = "Goal 5: \ngender equality",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal6"],
~improvement[after2015 == 0 & goal == "diff_goal6"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal6"],
~country[after2015 == 0 & goal == "diff_goal6"]
)
)
),
label = "Goal 6: clean water \nand sanitation",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal7"],
~improvement[after2015 == 0 & goal == "diff_goal7"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal7"],
~country[after2015 == 0 & goal == "diff_goal7"]
)
)
),
label = "Goal 7: affordable \nand clean energy",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal8"],
~improvement[after2015 == 0 & goal == "diff_goal8"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal8"],
~country[after2015 == 0 & goal == "diff_goal8"]
)
)
),
label = "Goal 8: decent work \nand economic growth",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal9"],
~improvement[after2015 == 0 & goal == "diff_goal9"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal9"],
~country[after2015 == 0 & goal == "diff_goal9"]
)
)
),
label = "Goal 9: industry, innovation \nand infrastructure",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal10"],
~improvement[after2015 == 0 & goal == "diff_goal10"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal10"],
~country[after2015 == 0 & goal == "diff_goal10"]
)
)
),
label = "Goal 10: \nreduced inequalities",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal11"],
~improvement[after2015 == 0 & goal == "diff_goal11"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal11"],
~country[after2015 == 0 & goal == "diff_goal11"]
)
)
),
label = "Goal 11: sustainable \ncities and communities",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal12"],
~improvement[after2015 == 0 & goal == "diff_goal12"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal12"],
~country[after2015 == 0 & goal == "diff_goal12"]
)
)
),
label = "Goal 12: responsible \nconsumption and production",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal13"],
~improvement[after2015 == 0 & goal == "diff_goal13"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal13"],
~country[after2015 == 0 & goal == "diff_goal13"]
)
)
),
label = "Goal 13: \nclimate action",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal15"],
~improvement[after2015 == 0 & goal == "diff_goal15"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal15"],
~country[after2015 == 0 & goal == "diff_goal15"]
)
)
),
label = "Goal 15: \nlife on earth",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal16"],
~improvement[after2015 == 0 & goal == "diff_goal16"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal16"],
~country[after2015 == 0 & goal == "diff_goal16"]
)
)
),
label = "Goal 16: peace, justice \nand strong institutions",
method = "restyle"
),
list(
args = list(
list(
y = list(
~improvement[after2015 == 1 & goal == "diff_goal17"],
~improvement[after2015 == 0 & goal == "diff_goal17"]
),
x = list(
~country[after2015 == 1 & goal == "diff_goal17"],
~country[after2015 == 0 & goal == "diff_goal17"]
)
)
),
label = "Goal 17: partnerships \nfor the goals",
method = "restyle"))))) %>%
config(displayModeBar = FALSE)
We notice various patterns, among them:
Goals 2 (zero hunger), 3 (good health and well-being), 6 (clean water and sanitation), 8 (decent work and economic growth), 12 (responsible consumption and production), 16 (peace, justice and strong institutions) have very low improvements per year. Indeed, even the best ones are below 10%.
Goal 10 (reduced inequalities) has the best improvements, all 20 best improvements are above 20% and it goes up to 45%.
Some goals clearly had most of their best improvements before 2015: goals 3 (good health and well-being), 5 (gender equality), 6 (clean water and sanitation), 7 (affordable and clean energy).
Some goals clearly had most of their best improvements after 2015: goals 8 (decent work and economic growth), 12 (responsible consumption and production).
Goal 9 (industry, innovation and infrastructure) has all of its 20 best improvements after 2015.
Regarding the impact of the adoption of the SDGs in 2015, we cannot say that it had a positive impact: there are not more big improvements after 2015 than before (even slightly fewer), and the most impressive improvements mostly occurred before 2015. These conclusions are supported by the next graph: we fit two different regression lines (before and after 2015) to see if there is a jump after the adoption of the SDGs and whether the SDG scores increase faster. We cut the y-axis to get a better view of the different scores: since the regression lines (across all goals) stay between 30% and 85%, we only kept those values.
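As a numeric companion to that graph, a sketch for the overall score comparing the two segment slopes, using the same overlap-at-2015 convention as the chunk below:
Code
before <- dplyr::filter(binary2015, as.numeric(year) <= 2015)
after  <- dplyr::filter(binary2015, as.numeric(year) >= 2015)
c(slope_before = unname(coef(lm(overallscore ~ year, data = before))["year"]),
  slope_after  = unname(coef(lm(overallscore ~ year, data = after))["year"]))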
Code
# Graphs to show the jump (or not) in 2015
# Filter data
data_after_2015 <- filter(binary2015, as.numeric(year) >= 2015)
data_before_2016 <- filter(binary2015, as.numeric(year) <= 2015)
# Different patterns across SDGs before and after 2015 plotly
plotly::plot_ly() %>%
plotly::add_trace(data = data_after_2015,
x = ~year,
y = ~fitted(lm(overallscore ~ year,
data = data_after_2015)),
type = 'scatter',
mode = 'lines',
line = list(color = Fix_color,
width = 3),
name = "After 2015") %>%
plotly::add_trace(data = data_before_2016,
x = ~year,
y = ~fitted(lm(overallscore ~ year,
data = data_before_2016)),
type = 'scatter',
mode = 'lines',
line = list(color = viridis(1,
direction = -1,
end = 0.9),
width = 3),
name = "Before 2015") %>%
plotly::layout(title = "Different patterns across SDGs before and after 2015",
xaxis = list(title = "Year"),
yaxis = list(title = "SDG achievement score",
range = c(30, 90)),
shapes = list(
list(
type = 'line',
x0 = 2015,
x1 = 2015,
y0 = 0,
y1 = 1,
yref = 'paper',
line = list(color = viridis(1,
option = "B",
begin = 0.5) ,
width = 3,
dash = 'dot')
)
),
updatemenus = list(
list(
buttons = list(
list(
args = list("y", list(
~fitted(lm(overallscore ~ year, data = data_after_2015)),
~fitted(lm(overallscore ~ year, data = data_before_2016))
)),
label = "Overall score",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal1 ~ year, data = data_after_2015)),
~fitted(lm(goal1 ~ year, data = data_before_2016))
)),
label = "Goal 1: \nno poverty",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal2 ~ year, data = data_after_2015)),
~fitted(lm(goal2 ~ year, data = data_before_2016))
)),
label = "Goal 2: \nzero hunger",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal3 ~ year, data = data_after_2015)),
~fitted(lm(goal3 ~ year, data = data_before_2016))
)),
label = "Goal 3: good health \nand well-being",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal4 ~ year, data = data_after_2015)),
~fitted(lm(goal4 ~ year, data = data_before_2016))
)),
label = "Goal 4: \nquality education",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal5 ~ year, data = data_after_2015)),
~fitted(lm(goal5 ~ year, data = data_before_2016))
)),
label = "Goal 5: \ngender equality",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal6 ~ year, data = data_after_2015)),
~fitted(lm(goal6 ~ year, data = data_before_2016))
)),
label = "Goal 6: clean water \nand sanitation",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal7 ~ year, data = data_after_2015)),
~fitted(lm(goal7 ~ year, data = data_before_2016))
)),
label = "Goal 7: affordable \nand clean energy",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal8 ~ year, data = data_after_2015)),
~fitted(lm(goal8 ~ year, data = data_before_2016))
)),
label = "Goal 8: decent work \nand economic growth",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal9 ~ year, data = data_after_2015)),
~fitted(lm(goal9 ~ year, data = data_before_2016))
)),
label = "Goal 9: industry, innovation \nand infrastructure",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal10 ~ year, data = data_after_2015)),
~fitted(lm(goal10 ~ year, data = data_before_2016))
)),
label = "Goal 10: \nreduced inequalities",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal11 ~ year, data = data_after_2015)),
~fitted(lm(goal11 ~ year, data = data_before_2016))
)),
label = "Goal 11: sustainable \ncities and communities",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal12 ~ year, data = data_after_2015)),
~fitted(lm(goal12 ~ year, data = data_before_2016))
)),
label = "Goal 12: responsible \nconsumption and production",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal13 ~ year, data = data_after_2015)),
~fitted(lm(goal13 ~ year, data = data_before_2016))
)),
label = "Goal 13: \nclimate action",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal15 ~ year, data = data_after_2015)),
~fitted(lm(goal15 ~ year, data = data_before_2016))
)),
label = "Goal 15: \nlife on earth",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal16 ~ year, data = data_after_2015)),
~fitted(lm(goal16 ~ year, data = data_before_2016))
)),
label = "Goal 16: peace, justice \nand strong institutions",
method = "restyle"
),
list(
args = list("y", list(
~fitted(lm(goal17 ~ year, data = data_after_2015)),
~fitted(lm(goal17 ~ year, data = data_before_2016))
)),
label = "Goal 17: partnerships \nfor the goals",
method = "restyle"))))) %>%
config(displayModeBar = FALSE)
We notice various patterns, among them:
- Goals 1, 3, 4 and 15 increase faster before 2015 than after.
- Except for goal 17, no goal seems to increase faster after the adoption of the SDGs. Since goal 17 is about collaboration between countries on SDG achievement, it is no surprise that it showed no increase before the adoption; it is disappointing, though, that it is the only goal whose improvement rate rises after 2015.
- Goal 17 also shows a small downward jump in 2015, but since it immediately increases in the following years, this is an artifact of fitting separate lines.
- We observe small upward jumps for goals 8, 9, 10 and 11.
To sum up, the adoption of the SDGs was a success in terms of collaboration between countries to improve on some aspects of sustainability (goal 17), but for the goals themselves we cannot conclude that there were faster improvements or radical efforts following 2015.
3.4 Focus on the influence of events over the SDG scores
#> Variable W_Value P_Value
#> 1 overallscore 0.9765 7.05e-24
#> 2 total_affected 0.0643 1.26e-85
#> 3 total_deaths 0.0342 2.50e-86
The values of W (Shapiro-Wilk normality tests) are very close to zero (W = 0.06 and W = 0.03) for the disaster variables, which suggests that the data diverge considerably from a normal distribution.
#> Variable W_Value P_Value
#> 1 cases_per_million 0.177 1.90e-83
#> 2 deaths_per_million 0.228 4.56e-82
#> 3 stringency 0.402 1.24e-76
The values of W are close to zero (W = 0.18, W = 0.23 and W = 0.40) for the COVID-19 variables, which suggests that the data diverge from a normal distribution.
#> Variable W_Value P_Value
#> 1 ongoing 0.433 7.28e-69
#> 2 sum_deaths 0.137 2.28e-77
#> 3 pop_affected 0.327 2.93e-72
#> 4 area_affected 0.288 2.14e-73
#> 5 maxintensity 0.453 3.69e-68
The values of W are close to zero (W = 0.433, W = 0.137, W = 0.327, W = 0.288 and W = 0.453) for the conflict variables, which suggests that the data diverge from a normal distribution.
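As a minimal sketch of how such a table could be produced (assuming the data_question3_1 data frame used below, and recalling that shapiro.test() accepts between 3 and 5000 non-missing observations), one can loop the Shapiro-Wilk test over the relevant columns:
Code
# Minimal sketch: Shapiro-Wilk normality tests over a set of columns
normality_table <- function(data, vars) {
  rows <- lapply(vars, function(v) {
    test <- shapiro.test(na.omit(data[[v]]))
    data.frame(Variable = v,
               W_Value = round(unname(test$statistic), 4),
               P_Value = format(test$p.value, digits = 3))
  })
  do.call(rbind, rows)
}
normality_table(data_question3_1,
                c("overallscore", "total_affected", "total_deaths"))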
To get an overview of the relationships between the different event variables and the SDG overall score, we build several graphs showing the Pearson correlation coefficient between the variables, scatter plots describing their relationships, and the distribution of each variable.
Code
# Function to create the lower triangle of the correlation matrix
lower.panel <-
function(x, y, ...){
points(x,
y,
pch = 20,
col = Fix_color,
cex = 0.2)}
# Function to create the histogram of the plot
panel.hist <-
function(x, ...){
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col = Fix_color, ...)}
# panel.cor_stars function with stars alongside correlation coefficients
panel.cor_stars <-
function(x, y, digits = 2, prefix = "", cex.cor, ...) {
usr <- par("usr"); on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
r <- cor(x, y)
p_value <- cor.test (x,y)$p.value
if (p_value < 0.001){
stars <- "***"}
else if (p_value < 0.01) {
stars <- "**"}
else if (p_value < 0.05) {
stars <- "*"} else {
stars <- ""}
txt <-
paste0(format(c(r, 0.123456789),
digits = digits)[1],
" ",
stars)
if(missing(cex.cor)) cex.cor <- 0.5/strwidth(txt)
text(0.5,
0.5,
txt,
cex = cex.cor)}
Code
#### Correlation table and distribution of Disaster variables ####
pairs(data_question3_1[, c("overallscore",
"total_affected",
"total_deaths")],
upper.panel = panel.cor_stars,
diag.panel = panel.hist,
lower.panel = lower.panel,
main = "Correlation table and distribution of Disaster variables")Meaning of the stars: *** : p_value < 0.001; ** : p_value < 0.01; *: p_value <0.05; no star if p_value is higher.
The variables used to capture the impact of climate disasters do not seem to have a strong influence on the overall score. The correlation between overallscore and total_affected suggests a very weak negative linear relationship that is not statistically significant (p ≥ 0.05), while the correlation between overallscore and total_deaths indicates a weak negative linear relationship that is statistically significant at p < 0.05. We will nevertheless explore the individual SDGs, since we believe such disasters have a specific influence on some of them.
Code
#### Correlation table and distribution of COVID variables ####
pairs(data_question3_2[,c("overallscore",
"cases_per_million",
"deaths_per_million",
"stringency")],
upper.panel = panel.cor_stars,
diag.panel=panel.hist,
lower.panel = lower.panel,
main="Correlation table and distribution of COVID variables")Meaning of the stars: *** : p_value < 0.001; ** : p_value < 0.01; *: p_value <0.05; no star if p_value is higher.
The variables used to capture the impact of COVID-19 do not seem to have a strong influence on the overall score: overallscore and cases_per_million/deaths_per_million/stringency have correlation coefficients indicating weak positive linear relationships that are highly statistically significant (p < 0.001). We will nevertheless explore the individual SDGs, since we believe COVID-19 had a specific influence on some of them, for instance "good health and well-being" or "decent work and economic growth".
Concerning the correlations among the COVID-19 variables themselves, there are no surprises. cases_per_million and deaths_per_million show a moderate-to-strong positive correlation: an increase in cases per million is associated with a substantial increase in deaths per million, indicating a clear link between case prevalence and mortality. cases_per_million and stringency show a moderate positive correlation: higher case counts are associated with somewhat stricter health measures, which could mean that where cases are more numerous, stricter sanitary measures are put in place to control the spread of the virus. Finally, deaths_per_million and stringency show a strong positive correlation: higher mortality is associated with stricter sanitary measures, suggesting that where deaths are higher, stricter measures are applied in an attempt to reduce the spread of the virus and mortality.
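As a quick check, any single coefficient in the pairs plot above can be reproduced with cor.test(); a minimal sketch using the same data_question3_2 columns:
Code
# Minimal sketch: the Pearson test behind one cell of the COVID pairs plot
cor.test(data_question3_2$cases_per_million,
         data_question3_2$deaths_per_million,
         method = "pearson")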
Code
#### Correlation table and distribution of conflicts variables ####
pairs(data_question3_3[,c("overallscore",
"ongoing",
"sum_deaths",
"pop_affected",
"area_affected",
"maxintensity")],
upper.panel = panel.cor_stars,
diag.panel=panel.hist,
lower.panel = lower.panel,
main="Correlation table and distribution of conflicts variables")Meaning of the stars: *** : p_value < 0.001; ** : p_value < 0.01; *: p_value <0.05; no star if p_value is higher.
Negative values (ranging from -0.17 to -0.28) with three stars (***) indicate weak but highly statistically significant negative correlations between the overall index (overallscore) and the various conflict-related variables (ongoing, sum_deaths, pop_affected, area_affected, maxintensity): a higher overallscore is associated with lower values of these variables. We must keep in mind, however, that correlation does not necessarily imply direct causation.
To explore our data on events such as disasters, COVID-19 and conflicts, we first have to see which regions are most affected by them. To do so, we ran time-series analyses on these three events, each time across different variables.
Code
#### Date format conversion ####
# Converted 'year' column to date format
Q3.1$year <- as.numeric(format(as.Date(as.character(Q3.1$year),
format = "%Y"), "%Y"))
Q3.2$year <- as.numeric(format(as.Date(as.character(Q3.2$year),
format = "%Y"), "%Y"))
Q3.3$year <- as.numeric(format(as.Date(as.character(Q3.3$year),
format = "%Y"), "%Y"))These is our time-analysis concerning climatic disasters with total affected per region and with total deaths per region.
In terms of total affected, the regions hit hardest by climate disasters are East Asia, North America and South Asia; in terms of total deaths, they are East Asia, Latin America & the Caribbean and South Asia. We will concentrate on these regions.
Code
#### Trend of Climatic Disasters Variables Over Time graph ####
library(ggplot2)
library(plotly)
# Replace all missing values with 0
Q3.1[is.na(Q3.1)] <- 0
# Create a combined ggplot
combined_ggplot <- ggplot() +
geom_smooth(data = Q3.1, aes(x = year, y = total_affected, group = region, color = region),
method = "loess", se = FALSE, span = 0.7, size = 0.5) +
geom_smooth(data = Q3.1, aes(x = year, y = total_deaths, group = region, color = region),
method = "loess", se = FALSE, span = 0.7, size = 0.6) +
facet_wrap(~ region, nrow = 5, scales = "fixed") +
labs(x = "Year", y = "") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
strip.text = element_text(size = 8),
panel.spacing = unit(0.5, "lines"),
plot.title = element_text(hjust = 0.5),
legend.position = "none")
# Convert the ggplot to a Plotly object
combined_plotly <- ggplotly(combined_ggplot, dynamicTicks = TRUE)
# Function to create a visibility vector
make_visibility_vector <- function(total_traces, position) {
c(rep(position == 1, total_traces / 2),
rep(position == 2, total_traces / 2))
}
# Add interactive buttons to the Plotly layout
combined_plotly <- combined_plotly %>%
layout(
title = "Trend of Total Affected and Deaths from Climatic Disasters Over Time",
updatemenus = list(
list(
type = "buttons",
direction = "down",
showactive = TRUE,
buttons = list(
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 1)),
label = "Total Affected"),
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 2)),
label = "Total Deaths")
)
)
),
showlegend = FALSE
)
# Display the interactive plot
combined_plotly
This is our time-series analysis of COVID-19 cases per million, deaths per million and stringency by region between late 2018 and 2022.
Code
#### Trend of COVID-19 Variables Over Time graph ####
library(ggplot2)
library(plotly)
# Keep COVID-19 observations from 2018 onward
covid_filtered <- Q3.2[Q3.2$year >= 2018, ]
# Create a combined ggplot
combined_ggplot <- ggplot() +
geom_smooth(data = covid_filtered, aes(x = year, y = cases_per_million, group = region, color = region),
method = "loess", se = FALSE, span = 0.3, size = 0.6) +
geom_smooth(data = covid_filtered, aes(x = year, y = deaths_per_million, group = region, color = region),
method = "loess", se = FALSE, span = 0.3, size = 0.6) +
geom_smooth(data = covid_filtered, aes(x = year, y = stringency, group = region, color = region),
method = "loess", se = FALSE, span = 0.3, size = 0.6) +
facet_wrap(~ region, nrow = 5) +
labs(x = "Year", y = "") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
axis.text.y = element_text(hjust = 1),
strip.text = element_text(size = 8),
panel.spacing = unit(0.5, "lines"),
plot.title = element_text(hjust = 0.5),
legend.position = "none")
# Convert the ggplot to a Plotly object
combined_plotly <- ggplotly(combined_ggplot, dynamicTicks = TRUE)
# Function to create a visibility vector
make_visibility_vector <- function(total_traces, position) {
c(rep(position == 1, total_traces / 3),
rep(position == 2, total_traces / 3),
rep(position == 3, total_traces / 3))
}
# Add interactive buttons to the Plotly layout
combined_plotly <- combined_plotly %>%
layout(
title = "Trend of COVID-19 Over Time",
updatemenus = list(
list(
type = "buttons",
direction = "down",
showactive = TRUE,
buttons = list(
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 1)),
label = "Cases per Million"),
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 2)),
label = "Deaths per Million"),
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 3)),
label = "Stringency")
)
)
),
showlegend = FALSE
)
# Display the interactive plot
combined_plotly
This is our time-series analysis of conflict deaths, affected population and maximum conflict intensity (maxintensity) per region between 2000 and 2016.
The regions most affected in terms of deaths are the Middle East & North Africa, Sub-Saharan Africa and South Asia, followed at a distance by Latin America & the Caribbean and Eastern Europe. The regions most affected in terms of affected population are the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean and Caucasus & Central Asia. Finally, the regions reaching maximum conflict intensity are Caucasus & Central Asia, Eastern Europe, the Middle East & North Africa, Sub-Saharan Africa, North America and South Asia, with East Asia peaking only at one precise moment and Latin America & the Caribbean to a lesser extent.
Code
#### Trend of Conflicts Variables Over Time graph ####
library(ggplot2)
library(plotly)
conflicts_filtered <- Q3.3[Q3.3$year >= 2000 & Q3.3$year <= 2016, ]
combined_ggplot <- ggplot() +
geom_smooth(data = conflicts_filtered, aes(x = year, y = sum_deaths, group = region, color = region),
method = "loess", se = FALSE, span = 0.3, size = 0.6) +
geom_smooth(data = conflicts_filtered, aes(x = year, y = pop_affected, group = region, color = region),
method = "loess", se = FALSE, span = 0.3, size = 0.6) +
geom_smooth(data = conflicts_filtered, aes(x = year, y = maxintensity, group = region, color = region),
method = "loess", se = FALSE, span = 0.3, size = 0.6) +
facet_wrap(~ region, nrow = 5) +
labs(x = "Year", y = "") +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
axis.text.y = element_text(hjust = 1),
strip.text = element_text(size = 8),
panel.spacing = unit(0.5, "lines"),
plot.title = element_text(hjust = 0.5),
legend.position = "none")
combined_plotly <- ggplotly(combined_ggplot, dynamicTicks = TRUE)
make_visibility_vector <- function(total_traces, position) {
c(rep(position == 1, total_traces / 3),
rep(position == 2, total_traces / 3),
rep(position == 3, total_traces / 3))
}
combined_plotly <- combined_plotly %>%
layout(
title = "Trend of Conflicts Over Time",
updatemenus = list(
list(
type = "buttons",
direction = "down",
showactive = TRUE,
buttons = list(
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 1)),
label = "Deaths by Conflicts"),
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 2)),
label = "Population Affected"),
list(method = "restyle",
args = list("visible", make_visibility_vector(length(combined_plotly$x$data), 3)),
label = "Maxintensity")
)
)
),
showlegend = FALSE
)
combined_plotly
Now that we have visualized which regions are most affected by these three events, we can run correlation analyses per region to see whether these events indeed have an impact on the evolution of the SDG goals.
3.4.1 Focus on the correlation between the SDG scores and the different events.
Here you can see an extract of:
- Our correlation map between the climate disasters and the SDG goals in South Asia, East Asia and North America, as these were the regions most affected. We conclude that climate disasters do not really have a big impact on SDG scores per region.
- Our correlation map between COVID-19 and the SDG goals, restricted to the COVID-19 period. We reach the same conclusion: the correlations are still not significant, which is surprising.
- Our correlation map between conflict deaths and the SDG goals in the Middle East & North Africa, Sub-Saharan Africa, South Asia and Latin America & the Caribbean, as these were the regions most affected.
- Our correlation between the conflict-affected population and the SDG goals, only for the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean and Caucasus & Central Asia.
- Our correlation between maximum conflict intensity and the SDG goals, only for Caucasus & Central Asia, Eastern Europe, the Middle East & North Africa, Sub-Saharan Africa, North America, South Asia, East Asia and Latin America & the Caribbean.
Code
disaster_data <-
Q3.1[Q3.1$region %in% c("South Asia", "East Asia", "North America"), ]
relevant_columns <-
c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8",
"goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16",
"total_affected", "total_deaths")
subset_data <- disaster_data[, relevant_columns]
correlation_matrix_subset <-
cor(subset_data[, c("total_affected", "total_deaths")],
subset_data,
method = "spearman")
cor_melted <- reshape2::melt(correlation_matrix_subset)
names(cor_melted) <- c("Variable2", "Variable1", "Correlation")
#### Correlation between the climate disasters and the SDG goals in South and East Asia and North America heatmap ####
ggplot(data = cor_melted,
aes(Variable1,
Variable2,
fill = Correlation)) +
geom_tile(width = 1.05,
height = 1.05) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
begin = 0,
limit = c(-1, 1)) +
theme_minimal() +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(size = 11, margin = margin(b = 20), hjust = 0.5,
vjust = 3.5, lineheight = 1),
legend.title = element_text(size = 9)) +
coord_fixed() +
labs(x = '',
y = '',
title = 'Correlation between the climate disasters and the SDG goals\nin South and East Asia and North America')
#### Correlation between COVID and the SDG goals heatmap ####
covid_filtered <- Q3.2
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6",
"goal7", "goal8", "goal9", "goal10", "goal11", "goal12",
"goal13", "goal15", "goal16", "stringency",
"cases_per_million", "deaths_per_million")
subset_data <- covid_filtered[, relevant_columns]
correlation_matrix_Covid <-
cor(subset_data,
subset_data[, c("stringency",
"cases_per_million",
"deaths_per_million")],
method = "spearman")
cor_melted <- as.data.frame(as.table(correlation_matrix_Covid))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
ggplot(data = cor_melted,
aes(Variable1,
Variable2,
fill = Correlation)) +
geom_tile(height = 1.05,
width = 1.05) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
begin = 0,
limit = c(-1, 1)) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(size = 11, margin = margin(b = 20),
hjust = 0.5,
vjust = 3.5,
lineheight = 1),
legend.title = element_text(size = 9)) +
coord_fixed() +
labs(x = '',
y = '',
title = 'Correlation between COVID and the SDG goals')
#### Correlation between Conflicts Deaths and the SDG goals heatmap ####
conflicts_filtered <-
Q3.3[Q3.3$region %in% c("Middle East & North Africa", "Sub-Saharan Africa",
"South Asia", "Latin America & the Caribbean"), ]
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6",
"goal7", "goal8", "goal9", "goal10", "goal11", "goal12",
"goal13", "goal15", "goal16", "sum_deaths")
subset_data <- conflicts_filtered[, relevant_columns]
correlation_matrix_Conflicts_Deaths <-
cor(subset_data,
subset_data[, c("sum_deaths")],
method = "spearman")
cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Deaths))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
ggplot(data = cor_melted,
aes(Variable1,
Variable2,
fill = Correlation)) +
geom_tile(width = 1.05,
height = 1.05) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
begin = 0,
limit = c(-1, 1)) +
theme_minimal() +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(size = 11, margin = margin(b = 20), hjust = 0.5,
vjust = 6, lineheight = 2),
legend.title = element_text(size = 9)) +
coord_fixed(ratio = 1) +
labs(x = '',
y = 'Deaths',
title = 'Correlation between Conflicts Deaths and the SDG goals')
#### Correlation between Conflicts Affected Population and the SDG goals ####
conflicts_filtered <-
Q3.3[Q3.3$region %in% c("Middle East & North Africa", "Sub-Saharan Africa",
"South Asia", "Latin America & the Caribbean",
"Caucasus & Central Asia"), ]
relevant_columns <-
c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8",
"goal9", "goal10", "goal11", "goal12", "goal13", "goal15",
"goal16", "pop_affected")
subset_data <- conflicts_filtered[, relevant_columns]
correlation_matrix_Conflicts_Pop_Aff <-
cor(subset_data,
subset_data[, c("pop_affected")],
method = "spearman")
cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Pop_Aff))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
ggplot(data = cor_melted,
aes(Variable1,
Variable2,
fill = Correlation)) +
geom_tile(width = 1.05,
height = 1.05) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
begin = 0,
limit = c(-1, 1)) +
theme_minimal() +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
plot.title = element_text(size = 11, margin = margin(b = 20),
hjust = 0.5, vjust = 6, lineheight = 2),
legend.title = element_text(size = 9)) +
coord_fixed(ratio = 1) +
labs(x = '',
y = 'Affected Population',
title = 'Correlation between Conflicts Affected Population and the SDG goals')
#### Correlation between Maxintensity in Conflicts and the SDG goals ####
conflicts_filtered <-
Q3.3[Q3.3$region %in% c("Middle East & North Africa", "Sub-Saharan Africa",
"South Asia", "Latin America & the Caribbean",
"Caucasus & Central Asia", "Eastern Europe",
"North America", "East Asia"), ]
relevant_columns <-
c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8",
"goal9", "goal10", "goal11", "goal12", "goal13", "goal15",
"goal16", "maxintensity")
subset_data <- conflicts_filtered[, relevant_columns]
correlation_matrix_Conflicts_Maxintensity <-
cor(subset_data,
subset_data[, c("maxintensity")],
method = "spearman")
cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Maxintensity))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
ggplot(data = cor_melted,
aes(Variable1,
Variable2,
fill = Correlation)) +
geom_tile(width = 1.05,
height = 1.05) +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
begin = 0,
limit = c(-1, 1)) +
theme_minimal() +
theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
axis.text.y = element_text(vjust = 0, size = 8, hjust = 1),
plot.title = element_text(size = 11, margin = margin(b = 20),
hjust = 0.5, vjust = 6, lineheight = 2),
legend.title = element_text(size = 9)) +
coord_fixed(ratio = 1) +
labs(x = '',
y = 'Maxintensity',
title = 'Correlation between Maxintensity in Conflicts and the SDG goals')
After obtaining almost the same results everywhere, we asked ourselves whether the absence of correlation is because the consequences of these disasters only materialize later, so we decided to recompute the same correlations with a one-year gap.
3.4.2 Correlations for each event with one year gap
Here you can see, for example, our correlation map between the climate disasters and the SDG goals in South Asia, East Asia and North America with a one-year gap.
Code
#### Correlation between the climate disasters and the SDG goals in South and East Asia with 1 year gap ####
disaster_data <-
# Note: region labels must match exactly (a stray leading space would silently drop North America)
Q3.1[Q3.1$region %in% c("South Asia", "East Asia", "North America"), ]
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6",
"goal7", "goal8", "goal9", "goal10", "goal11", "goal12",
"goal13", "goal15", "goal16", "total_affected",
"total_deaths")
subset_data <- disaster_data[, relevant_columns]
lagged_subset_data <- subset_data %>%
mutate(
lagged_total_affected = lag(total_affected, default = NA),
lagged_total_deaths = lag(total_deaths, default = NA)
)
correlation_matrix_lagged <-
cor(lagged_subset_data[, c("lagged_total_affected",
"lagged_total_deaths")],
subset_data,
method = "spearman")
cor_melted_lagged <- reshape2::melt(correlation_matrix_lagged)
names(cor_melted_lagged) <- c("Variable2", "Variable1", "Correlation")
ggplot(data = cor_melted_lagged,
aes(Variable1,
Variable2,
fill = Correlation)) +
geom_tile() +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
begin = 0,
limit = c(-1, 1)) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, size = 7, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 7, hjust = 1),
legend.title = element_text(size = 6),
plot.title = element_text(hjust = 0.5,
size = 11,
margin = margin(b = 15)),
legend.key.size = unit(0.4, "cm"),
legend.text = element_text(size = 6)) +
coord_fixed() +
labs(x = '',
y = '',
title = 'Correlation between the climate disasters \nand the SDG goals in South and East Asia with 1 year gap')
Even with a one-year gap, climate disasters, measured through the population affected and killed, do not seem to have the impact on the SDG scores that we would have thought. Still somewhat optimistic, we decided to look at the lagged correlations year by year.
3.4.3 Interactive map of the correlation between the different events and the SDG goals with 1 year gap.
Here you can see an extract of our interactive map of the correlation between the climate disasters and the SDG goals in South Asia, East Asia and North America with a one-year gap. To better understand the results: if we select a specific year (e.g., 2020) in the app, the analysis shows correlations between the SDG scores of the selected year (e.g., 2020) and the disaster-related variables (total_affected and total_deaths) of the previous year (e.g., 2019). Unfortunately, our Shiny app is not supported in static R Markdown documents. For more details, see our interactive maps in the "See Interactive" folder, in the document called "Interactive_Matrix_Plots".
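Since the app itself cannot run here, the following is a hypothetical minimal sketch of the idea (not our actual app): a year selector drives a lagged Spearman correlation heatmap, pairing each country's SDG scores for the selected year with its disaster variables from the year before. It assumes the Q3.1 data frame used above contains a country identifier column named code.
Code
# Hypothetical minimal sketch of the interactive lagged-correlation idea
library(shiny)
library(dplyr)
library(ggplot2)
library(reshape2)

goals <- paste0("goal", c(1:13, 15, 16))

lagged_cor_matrix <- function(data, yr) {
  # SDG scores of year `yr` vs. disaster variables of year `yr - 1`,
  # matched by country (assumed identifier column: `code`)
  current  <- filter(data, year == yr)
  previous <- filter(data, year == yr - 1)
  common   <- intersect(current$code, previous$code)
  cor(previous[match(common, previous$code),
               c("total_affected", "total_deaths")],
      current[match(common, current$code), goals],
      method = "spearman", use = "pairwise.complete.obs")
}

ui <- fluidPage(
  selectInput("year", "SDG score year:", choices = sort(unique(Q3.1$year))),
  plotOutput("heatmap")
)

server <- function(input, output) {
  output$heatmap <- renderPlot({
    melted <- melt(lagged_cor_matrix(Q3.1, as.numeric(input$year)),
                   varnames = c("Event", "Goal"), value.name = "Correlation")
    ggplot(melted, aes(Goal, Event, fill = Correlation)) +
      geom_tile() +
      scale_fill_viridis_c(name = "Spearman\nCorrelation", limits = c(-1, 1))
  })
}
# shinyApp(ui, server)  # run interactively; not rendered in static documents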

Here you can see the correlation between COVID-19 and the SDG goals with a one-year gap. Strangely, instead of the negative correlation we expected (the more cases and deaths caused by COVID-19, the more the SDG scores should suffer), with the one-year gap the scores of goal 3, goal 6, goal 9 and goal 16 appear to be quite positively associated with COVID-19.
Code
library(ggplot2)
library(reshape2)
library(dplyr)
# Re-read the COVID data; the lagging is done within each year subset below
Q3.2 <- read.csv(here("scripts", "data", "data_question3_2.csv"))
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6",
"goal7", "goal8", "goal9", "goal10", "goal11", "goal12",
"goal13", "goal15", "goal16", "stringency",
"cases_per_million", "deaths_per_million")
# Function to generate a lagged correlation matrix for a given year
generate_lagged_correlation_plot_covid <- function(year, data, relevant_columns) {
# Filter data for the specified year
current_year_data <- data[data$year == year, relevant_columns]
# Lag the relevant COVID columns
lagged_data <- current_year_data %>%
mutate(
lagged_stringency = lag(stringency, default = NA),
lagged_cases_per_million = lag(cases_per_million, default = NA),
lagged_deaths_per_million = lag(deaths_per_million, default = NA)
) %>%
select(-stringency, -cases_per_million, -deaths_per_million) # Exclude non-lagged variables
# Calculate the correlation matrix
correlation_matrix <- cor(lagged_data[, c("lagged_stringency", "lagged_cases_per_million", "lagged_deaths_per_million")],
lagged_data, method = "spearman")
# Melt the correlation matrix for ggplot
cor_melted <- melt(correlation_matrix)
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
# Create the ggplot
p <- ggplot(cor_melted,
aes(Variable2,
Variable1,
fill = Correlation)) +
geom_tile() +
scale_fill_viridis_c(name = "Spearman\nCorrelation",
limit = c(-1, 1)) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 45, size = 7, hjust = 1),
axis.text.y = element_text(vjust = 1, size = 7),
legend.title = element_text(size = 6),
plot.title = element_text(hjust = 0.5, size = 11, margin = margin(b = 15)),
legend.key.size = unit(0.4, "cm"),
legend.text = element_text(size = 6)) +
coord_fixed() +
labs(x = '',
y = '',
title = paste('Lagged Correlation for COVID-19 in Year', year))
return(p)}
# Generate and display the plots for 2020, 2021, and 2022
plot_2020 <- generate_lagged_correlation_plot_covid(2020,
Q3.2,
relevant_columns)
plot_2021 <- generate_lagged_correlation_plot_covid(2021,
Q3.2,
relevant_columns)
plot_2022 <- generate_lagged_correlation_plot_covid(2022,
Q3.2,
relevant_columns)
plot_2020
plot_2021
plot_2022
Finally, here you can see an extract of our interactive map of the correlation between each of the three conflict variables and the SDG goals with a one-year gap. Once again, for more details you can look at our document containing the interactive correlation matrices.

Here’s the extract of our interactive map of the correlation between the affected population in conflicts and the SDG goals with 1 year gap.

Here’s the extrant of our interactive map of the correlation between the Maxintensity in conflicts and the SDG goals with 1 year gap.

The results seem logical: if the SDG scores keep rising while a conflict remains the same or ends, we obtain a negative correlation.
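A toy illustration of this logic (made-up numbers, not our data): steadily rising SDG scores against a conflict indicator that stays flat and then ends yield a negative correlation.
Code
# Toy example: rising scores vs. a conflict that eventually ends
score    <- c(60, 62, 64, 66, 68, 70)
conflict <- c(1, 1, 1, 1, 0, 0)  # ongoing = 1, ended = 0
cor(score, conflict, method = "spearman")  # negative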
Our last idea is to look at the regressions between the SDG scores and the event variables we found most interesting.
3.4.4 Regressions between the SDG scores and the events variables.
Let's look at the regressions of each score on each variable in the disasters dataset (total_affected and total_deaths).
Code
library(plotly)
library(dplyr)
library(ggplot2)
disaster_data <- Q3.1[Q3.1$region %in% c("South Asia", "East Asia", "North America"), ]
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6",
"goal7", "goal8", "goal9", "goal10", "goal11", "goal12",
"goal13", "goal15", "goal16", "total_affected",
"total_deaths")
subset_data <- disaster_data[, relevant_columns]
# Run the linear regressions and create the plots
plots <- list()
for (goal in relevant_columns[1:15]) {
# For total_affected
p_affected <- ggplot(subset_data,
aes_string(x = "total_affected", y = goal)) +
geom_point(size = 0.1) +
geom_smooth(method = "lm",
formula = y ~ x,
se = FALSE,
size = 0.5) +
labs(x = "Total Affected",
y = paste("Goal",
substr(goal, 5, 6)))
# For total_deaths
p_deaths <- ggplot(subset_data,
aes_string(x = "total_deaths",
y = goal)) +
geom_point(size = 0.1) +
geom_smooth(method = "lm",
formula = y ~ x,
se = FALSE,
size = 0.5) +
labs(x = "Total Deaths",
y = paste("Goal",
substr(goal, 5, 6)))
# Add both plots to the list
plots[[goal]] <- list(ggplotly(p_affected),
ggplotly(p_deaths))
}
# Display all plots
subplot_plots <- lapply(plots, function(plot_pair) subplot(plot_pair[[1]],
plot_pair[[2]],
nrows = 1,
margin = 0.05))
subplot(subplot_plots,
nrows = length(subplot_plots),
margin = 0.05)
Most relationships between the goals and the variables (total_affected and total_deaths) are not statistically significant (p-values > 0.05). More specifically, in several models the coefficients for total_affected and total_deaths are small, indicating weak or negligible relationships with the respective goals, and some models have p-values close to 0.05 that still fall short of significance. Goals 7, 8, 10, 13 and 15 exhibit statistically significant relationships with total_affected, indicating small to moderate positive relationships; goals 7 and 8 also show statistically significant, moderately negative relationships with total_deaths.
These findings suggest that, in most cases, the relationships between the goals and the climate disaster variables (total affected and total deaths) are not statistically significant, although some goals do show small to moderate associations with these variables.
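As a minimal sketch of how such per-goal statistics can be tabulated (the helper below is our own illustration built on the subset_data defined above, not the exact models reported), one can loop over the goal columns:
Code
# Minimal sketch: slope, p-value and adjusted R^2 for each goal
# regressed on total_affected
goal_cols <- paste0("goal", c(1:13, 15, 16))

regression_table <- do.call(rbind, lapply(goal_cols, function(g) {
  fit <- lm(reformulate("total_affected", response = g), data = subset_data)
  coefs <- summary(fit)$coefficients
  data.frame(goal = g,
             slope = coefs["total_affected", "Estimate"],
             p_value = coefs["total_affected", "Pr(>|t|)"],
             adj_r2 = summary(fit)$adj.r.squared)
}))
regression_table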
We drew the same kind of conclusions for the COVID-19 and conflicts events; you can find a representation of each regression plot in our Interactive_Matrix_Plots document containing the Shiny apps, so that you can take a closer look and choose the goals that interest you the most ("C:/DS-project/report/See Interactive/Interactive_Matrix_Plots.qmd").
Indeed, for all goals, the predictor variables (stringency, number of cases per million and number of deaths per million) show statistically significant relationships. However, when assessed individually, these predictors explain only a marginal fraction of the variance of the respective objectives, with explanatory percentages ranging from around 0.141% to 6.99%. Adjusted R-squared values consistently indicate limited explanatory power for these relationships, implying that factors not accounted for drive the variations observed for each objective. Relying solely on stringency, cases per million and deaths per million thus yields modest predictive capabilities for each objective.
In summary, the statistical significance of stringency, cases per million and deaths per million in relation to each objective is clear, but these variables individually fail to explain the variations observed, highlighting the need to explore additional variables or unexplored factors in order to significantly improve the predictive ability for each respective objective.
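A minimal sketch of where such adjusted R-squared figures come from (goal3 is just an example; Q3.2 is the COVID data read above):
Code
# Single-predictor vs. joint COVID-19 models for one example goal
fit_single <- lm(goal3 ~ stringency, data = Q3.2)
fit_joint  <- lm(goal3 ~ stringency + cases_per_million + deaths_per_million,
                 data = Q3.2)
summary(fit_single)$adj.r.squared
summary(fit_joint)$adj.r.squared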
Finally, for the regressions of each SDG score on each variable in the conflicts dataset (pop_affected, sum_deaths and maxintensity), all three predictors exhibited statistically significant relationships with the respective goals across the board. maxintensity generally demonstrated a relatively stronger association than pop_affected and sum_deaths in most analyses. Collectively, however, these predictors explained only a small to moderate portion of the variability observed in the different goals (adjusted R-squared ranging from approximately 1% to 9.48%), suggesting that other, unaccounted factors significantly influence the outcomes of these goals.
To conclude, while pop_affected, sum_deaths and maxintensity consistently showed significant associations with the goals analyzed, their combined effect explained only a fraction of the variance observed. There are therefore likely additional crucial factors beyond these predictors that play substantial roles in influencing the outcomes of the respective goals.
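Similarly, a minimal sketch of the joint conflicts model behind the quoted adjusted R-squared range (goal16 is just an example; conflicts_filtered is the subset built earlier):
Code
# Joint conflicts model for one example goal
fit_conflicts <- lm(goal16 ~ pop_affected + sum_deaths + maxintensity,
                    data = conflicts_filtered)
summary(fit_conflicts)$adj.r.squared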
4 Conclusion
4.1 Take home message
In conclusion, our data analytics research project reveals gradual global progress in achieving the Sustainable Development Goals (SDGs), with notable variations across continents. While Europe emerges as a leader in SDG accomplishment, Africa lags behind, and Oceania shows diverse performance trends.
The interconnectedness of the SDGs is evident, with high achievement in one goal often correlating with success in others, except for goals 12 and 13, related to consumption and climate action, which show the inverse trend. Influential factors such as higher internet usage and legal economic freedom are positively associated with SDG scores. The analysis indicates that the impact of climate disasters on SDG achievement is notably weak, with only limited associations found between climate-related variables and specific goals. Although variables related to COVID-19 and conflicts are statistically significant for SDG achievement, they explain only limited variance for each objective.
As one might expect, SDG adoption fosters increased partnerships over the goals (goal 17), while goal 9, focusing on industry, innovation and infrastructure, exhibits a faster rate of advancement than the other goals, whether before or after 2015. Overall, our findings underscore the complex dynamics influencing global progress on the SDGs and emphasize the need for continued efforts and strategic partnerships to address the interconnected challenges and disparities across different regions and goals.
4.2 Limitations
We had to delete several countries due to a high percentage of missing values, which means that some interesting countries are not analyzed for every research question. For example, we removed Afghanistan and Somalia from the data used to analyze the different factors impacting the SDG scores.
In addition, we chose the events and factors for our analysis based on our knowledge and intuition about what would have the most interesting impact, but the possibilities are endless. It is thus difficult to know whether we focused on the most important variables to explain the different SDG scores across the world.
Finally, following the regression analyses on the SDG scores, we were able to gain a comprehensive understanding of how different factors impact the SDGs. However, given the low explanatory power of some of the regression models, probably caused by non-significant relationships between the chosen independent variables and some goals, omitted-variable bias, or over-fitting, we must exercise caution when drawing any conclusions from these results.
4.3 Future work
Our research serves as an overview of the different characteristics of a country, as well as uncontrollable events that can affect the 17 SDG scores. In this regard, future work could investigate other factors’ effects such as the type of country governance (dictatorship, democracy, etc.) or the tourism rate. It could additionally focus on other events, including financial crises and elections.
We kept our analysis at the goal level; however, each goal has several targets and indicators that track achievement and set subgoals. It would be interesting to dive into each goal separately to study these different targets and indicators more specifically, in order to get a more precise view of which aspect of each goal is impacted by the different factors.